<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[196756] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/196756">196756</a></dd>
<dt>Author</dt> <dd>fpizlo@apple.com</dd>
<dt>Date</dt> <dd>2016-02-18 08:40:25 -0800 (Thu, 18 Feb 2016)</dd>
</dl>

<h3>Log Message</h3>
<pre>Remove remaining references to LLVM, and make sure comments refer to the backend as &quot;B3&quot; not &quot;LLVM&quot;
https://bugs.webkit.org/show_bug.cgi?id=154383

Reviewed by Saam Barati.

Source/JavaScriptCore:

I did a grep -i llvm of all of our code and did one of the following for each occurrence:

- Renamed it to B3. This is appropriate when we were using &quot;LLVM&quot; to mean &quot;the FTL
  backend&quot;.

- Removed the reference because I found it to be dead. In some cases it was a dead
  comment: it was telling us things about what LLVM did and that's just not relevant
  anymore. In other cases it was dead code that I forgot to delete in a previous patch.

- Edited the comment in some smart way. There were comments talking about what LLVM did
  that were still of interest. In some cases, I added a FIXME to consider changing the
  code below the comment on the grounds that it was written in a weird way to placate
  LLVM and so we can do it better now.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGOSRAvailabilityAnalysisPhase.h:
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThread):
(JSC::DFG::Plan::compileInThreadImpl):
(JSC::DFG::Plan::compileTimeStats):
* dfg/DFGPutStackSinkingPhase.cpp:
* dfg/DFGSSAConversionPhase.h:
* dfg/DFGStaticExecutionCountEstimationPhase.h:
* dfg/DFGUnificationPhase.cpp:
(JSC::DFG::UnificationPhase::run):
* disassembler/ARM64Disassembler.cpp:
(JSC::tryToDisassemble): Deleted.
* disassembler/X86Disassembler.cpp:
(JSC::tryToDisassemble):
* ftl/FTLAbstractHeap.cpp:
(JSC::FTL::IndexedAbstractHeap::initialize):
* ftl/FTLAbstractHeap.h:
* ftl/FTLFormattedValue.h:
* ftl/FTLJITFinalizer.cpp:
(JSC::FTL::JITFinalizer::finalizeFunction):
* ftl/FTLLink.cpp:
(JSC::FTL::link):
* ftl/FTLLocation.cpp:
(JSC::FTL::Location::restoreInto):
* ftl/FTLLowerDFGToB3.cpp: Copied from Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp.
(JSC::FTL::DFG::ftlUnreachable):
(JSC::FTL::DFG::LowerDFGToB3::LowerDFGToB3):
(JSC::FTL::DFG::LowerDFGToB3::compileBlock):
(JSC::FTL::DFG::LowerDFGToB3::compileArithNegate):
(JSC::FTL::DFG::LowerDFGToB3::compileMultiGetByOffset):
(JSC::FTL::DFG::LowerDFGToB3::compileOverridesHasInstance):
(JSC::FTL::DFG::LowerDFGToB3::isBoolean):
(JSC::FTL::DFG::LowerDFGToB3::unboxBoolean):
(JSC::FTL::DFG::LowerDFGToB3::emitStoreBarrier):
(JSC::FTL::lowerDFGToB3):
(JSC::FTL::DFG::LowerDFGToLLVM::LowerDFGToLLVM): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::compileBlock): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::compileArithNegate): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::compileMultiGetByOffset): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::compileOverridesHasInstance): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::isBoolean): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::unboxBoolean): Deleted.
(JSC::FTL::DFG::LowerDFGToLLVM::emitStoreBarrier): Deleted.
(JSC::FTL::lowerDFGToLLVM): Deleted.
* ftl/FTLLowerDFGToB3.h: Copied from Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h.
* ftl/FTLLowerDFGToLLVM.cpp: Removed.
* ftl/FTLLowerDFGToLLVM.h: Removed.
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLWeight.h:
(JSC::FTL::Weight::frequencyClass):
(JSC::FTL::Weight::inverse):
(JSC::FTL::Weight::scaleToTotal): Deleted.
* ftl/FTLWeightedTarget.h:
(JSC::FTL::rarely):
(JSC::FTL::unsure):
* jit/CallFrameShuffler64.cpp:
(JSC::CallFrameShuffler::emitDisplace):
* jit/RegisterSet.cpp:
(JSC::RegisterSet::ftlCalleeSaveRegisters):
* llvm: Removed.
* llvm/InitializeLLVMLinux.cpp: Removed.
* llvm/InitializeLLVMWin.cpp: Removed.
* llvm/library: Removed.
* llvm/library/LLVMTrapCallback.h: Removed.
* llvm/library/libllvmForJSC.version: Removed.
* runtime/Options.cpp:
(JSC::recomputeDependentOptions):
(JSC::Options::initialize):
* runtime/Options.h:
* wasm/WASMFunctionB3IRGenerator.h: Copied from Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h.
* wasm/WASMFunctionLLVMIRGenerator.h: Removed.
* wasm/WASMFunctionParser.cpp:

Tools:

* Scripts/run-jsc-stress-tests:</pre>
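As a rough illustration of the audit workflow the log describes (grep for every case-insensitive mention of "llvm", then rename backend references to "B3"), here is a hedged shell sketch over a stand-in file tree. The paths and file contents below are hypothetical placeholders, not the real JavaScriptCore sources, and the actual patch was of course edited by hand rather than with a blanket sed:

```shell
# Stand-in tree with one file that mentions LLVM (hypothetical content).
dir=$(mktemp -d)
mkdir -p "$dir/ftl"
printf '// Lowers DFG IR to LLVM IR.\n' > "$dir/ftl/FTLLowerDFGToLLVM.h"

# Step 1: list every file with a case-insensitive mention of "llvm"
# (-r recursive, -i case-insensitive, -l print matching file names only).
hits=$(grep -ril llvm "$dir")
echo "$hits"

# Step 2: where "LLVM" meant "the FTL backend", rename it to B3 in both
# the file contents and the file name.
sed 's/LLVM/B3/g' "$dir/ftl/FTLLowerDFGToLLVM.h" > "$dir/ftl/FTLLowerDFGToB3.h"
rm "$dir/ftl/FTLLowerDFGToLLVM.h"

result=$(cat "$dir/ftl/FTLLowerDFGToB3.h")
echo "$result"
rm -rf "$dir"
```

Dead references would simply be deleted at step 2 instead of renamed; mentions that were still informative would be rewritten or given a FIXME, as the log explains.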

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj">trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGArgumentsEliminationPhasecpp">trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGOSRAvailabilityAnalysisPhaseh">trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGPlancpp">trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGPutStackSinkingPhasecpp">trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGSSAConversionPhaseh">trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.h</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGStaticExecutionCountEstimationPhaseh">trunk/Source/JavaScriptCore/dfg/DFGStaticExecutionCountEstimationPhase.h</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGUnificationPhasecpp">trunk/Source/JavaScriptCore/dfg/DFGUnificationPhase.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredisassemblerARM64Disassemblercpp">trunk/Source/JavaScriptCore/disassembler/ARM64Disassembler.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredisassemblerX86Disassemblercpp">trunk/Source/JavaScriptCore/disassembler/X86Disassembler.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLAbstractHeapcpp">trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLAbstractHeaph">trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLFormattedValueh">trunk/Source/JavaScriptCore/ftl/FTLFormattedValue.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLJITFinalizercpp">trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLinkcpp">trunk/Source/JavaScriptCore/ftl/FTLLink.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLocationcpp">trunk/Source/JavaScriptCore/ftl/FTLLocation.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLOSRExitCompilercpp">trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLWeighth">trunk/Source/JavaScriptCore/ftl/FTLWeight.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLWeightedTargeth">trunk/Source/JavaScriptCore/ftl/FTLWeightedTarget.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitCallFrameShuffler64cpp">trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorejitRegisterSetcpp">trunk/Source/JavaScriptCore/jit/RegisterSet.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreruntimeOptionscpp">trunk/Source/JavaScriptCore/runtime/Options.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreruntimeOptionsh">trunk/Source/JavaScriptCore/runtime/Options.h</a></li>
<li><a href="#trunkSourceJavaScriptCorewasmWASMFunctionParsercpp">trunk/Source/JavaScriptCore/wasm/WASMFunctionParser.cpp</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptsrunjscstresstests">trunk/Tools/Scripts/run-jsc-stress-tests</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToB3cpp">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToB3h">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.h</a></li>
<li><a href="#trunkSourceJavaScriptCorewasmWASMFunctionB3IRGeneratorh">trunk/Source/JavaScriptCore/wasm/WASMFunctionB3IRGenerator.h</a></li>
</ul>

<h3>Removed Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMcpp">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMh">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h</a></li>
<li>trunk/Source/JavaScriptCore/llvm/</li>
<li><a href="#trunkSourceJavaScriptCorewasmWASMFunctionLLVMIRGeneratorh">trunk/Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ChangeLog        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -1,3 +1,101 @@
</span><ins>+2016-02-17  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        Remove remaining references to LLVM, and make sure comments refer to the backend as &quot;B3&quot; not &quot;LLVM&quot;
+        https://bugs.webkit.org/show_bug.cgi?id=154383
+
+        Reviewed by Saam Barati.
+
+        I did a grep -i llvm of all of our code and did one of the following for each occurrence:
+
+        - Renamed it to B3. This is appropriate when we were using &quot;LLVM&quot; to mean &quot;the FTL
+          backend&quot;.
+
+        - Removed the reference because I found it to be dead. In some cases it was a dead
+          comment: it was telling us things about what LLVM did and that's just not relevant
+          anymore. In other cases it was dead code that I forgot to delete in a previous patch.
+
+        - Edited the comment in some smart way. There were comments talking about what LLVM did
+          that were still of interest. In some cases, I added a FIXME to consider changing the
+          code below the comment on the grounds that it was written in a weird way to placate
+          LLVM and so we can do it better now.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.h:
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::Plan::compileInThread):
+        (JSC::DFG::Plan::compileInThreadImpl):
+        (JSC::DFG::Plan::compileTimeStats):
+        * dfg/DFGPutStackSinkingPhase.cpp:
+        * dfg/DFGSSAConversionPhase.h:
+        * dfg/DFGStaticExecutionCountEstimationPhase.h:
+        * dfg/DFGUnificationPhase.cpp:
+        (JSC::DFG::UnificationPhase::run):
+        * disassembler/ARM64Disassembler.cpp:
+        (JSC::tryToDisassemble): Deleted.
+        * disassembler/X86Disassembler.cpp:
+        (JSC::tryToDisassemble):
+        * ftl/FTLAbstractHeap.cpp:
+        (JSC::FTL::IndexedAbstractHeap::initialize):
+        * ftl/FTLAbstractHeap.h:
+        * ftl/FTLFormattedValue.h:
+        * ftl/FTLJITFinalizer.cpp:
+        (JSC::FTL::JITFinalizer::finalizeFunction):
+        * ftl/FTLLink.cpp:
+        (JSC::FTL::link):
+        * ftl/FTLLocation.cpp:
+        (JSC::FTL::Location::restoreInto):
+        * ftl/FTLLowerDFGToB3.cpp: Copied from Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp.
+        (JSC::FTL::DFG::ftlUnreachable):
+        (JSC::FTL::DFG::LowerDFGToB3::LowerDFGToB3):
+        (JSC::FTL::DFG::LowerDFGToB3::compileBlock):
+        (JSC::FTL::DFG::LowerDFGToB3::compileArithNegate):
+        (JSC::FTL::DFG::LowerDFGToB3::compileMultiGetByOffset):
+        (JSC::FTL::DFG::LowerDFGToB3::compileOverridesHasInstance):
+        (JSC::FTL::DFG::LowerDFGToB3::isBoolean):
+        (JSC::FTL::DFG::LowerDFGToB3::unboxBoolean):
+        (JSC::FTL::DFG::LowerDFGToB3::emitStoreBarrier):
+        (JSC::FTL::lowerDFGToB3):
+        (JSC::FTL::DFG::LowerDFGToLLVM::LowerDFGToLLVM): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileBlock): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileArithNegate): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileMultiGetByOffset): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileOverridesHasInstance): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::isBoolean): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::unboxBoolean): Deleted.
+        (JSC::FTL::DFG::LowerDFGToLLVM::emitStoreBarrier): Deleted.
+        (JSC::FTL::lowerDFGToLLVM): Deleted.
+        * ftl/FTLLowerDFGToB3.h: Copied from Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h.
+        * ftl/FTLLowerDFGToLLVM.cpp: Removed.
+        * ftl/FTLLowerDFGToLLVM.h: Removed.
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        * ftl/FTLWeight.h:
+        (JSC::FTL::Weight::frequencyClass):
+        (JSC::FTL::Weight::inverse):
+        (JSC::FTL::Weight::scaleToTotal): Deleted.
+        * ftl/FTLWeightedTarget.h:
+        (JSC::FTL::rarely):
+        (JSC::FTL::unsure):
+        * jit/CallFrameShuffler64.cpp:
+        (JSC::CallFrameShuffler::emitDisplace):
+        * jit/RegisterSet.cpp:
+        (JSC::RegisterSet::ftlCalleeSaveRegisters):
+        * llvm: Removed.
+        * llvm/InitializeLLVMLinux.cpp: Removed.
+        * llvm/InitializeLLVMWin.cpp: Removed.
+        * llvm/library: Removed.
+        * llvm/library/LLVMTrapCallback.h: Removed.
+        * llvm/library/libllvmForJSC.version: Removed.
+        * runtime/Options.cpp:
+        (JSC::recomputeDependentOptions):
+        (JSC::Options::initialize):
+        * runtime/Options.h:
+        * wasm/WASMFunctionB3IRGenerator.h: Copied from Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h.
+        * wasm/WASMFunctionLLVMIRGenerator.h: Removed.
+        * wasm/WASMFunctionParser.cpp:
+
</ins><span class="cx"> 2016-02-18  Csaba Osztrogonác  &lt;ossy@webkit.org&gt;
</span><span class="cx"> 
</span><span class="cx">         [cmake] Build system cleanup
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -732,8 +732,8 @@
</span><span class="cx">                 0FEA0A0C170513DB00BB722C /* FTLCompile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEA0A01170513DB00BB722C /* FTLCompile.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 0FEA0A0D170513DB00BB722C /* FTLJITCode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEA0A02170513DB00BB722C /* FTLJITCode.cpp */; };
</span><span class="cx">                 0FEA0A0E170513DB00BB722C /* FTLJITCode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEA0A03170513DB00BB722C /* FTLJITCode.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><del>-                0FEA0A0F170513DB00BB722C /* FTLLowerDFGToLLVM.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEA0A04170513DB00BB722C /* FTLLowerDFGToLLVM.cpp */; };
-                0FEA0A10170513DB00BB722C /* FTLLowerDFGToLLVM.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEA0A05170513DB00BB722C /* FTLLowerDFGToLLVM.h */; settings = {ATTRIBUTES = (Private, ); }; };
</del><ins>+                0FEA0A0F170513DB00BB722C /* FTLLowerDFGToB3.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEA0A04170513DB00BB722C /* FTLLowerDFGToB3.cpp */; };
+                0FEA0A10170513DB00BB722C /* FTLLowerDFGToB3.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEA0A05170513DB00BB722C /* FTLLowerDFGToB3.h */; settings = {ATTRIBUTES = (Private, ); }; };
</ins><span class="cx">                 0FEA0A12170513DB00BB722C /* FTLState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEA0A07170513DB00BB722C /* FTLState.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 0FEA0A161706BB9000BB722C /* FTLState.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEA0A151706BB9000BB722C /* FTLState.cpp */; };
</span><span class="cx">                 0FEA0A1C1708B00700BB722C /* FTLAbstractHeap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEA0A171708B00700BB722C /* FTLAbstractHeap.cpp */; };
</span><span class="lines">@@ -1296,7 +1296,7 @@
</span><span class="cx">                 7B39F76E1B62DE3200360FB4 /* WASMModuleParser.h in Headers */ = {isa = PBXBuildFile; fileRef = 7B39F76A1B62DE2200360FB4 /* WASMModuleParser.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 7B39F7701B62DE3200360FB4 /* WASMReader.h in Headers */ = {isa = PBXBuildFile; fileRef = 7B39F76C1B62DE2200360FB4 /* WASMReader.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 7B39F7721B63574D00360FB4 /* WASMReader.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7B39F7711B63574B00360FB4 /* WASMReader.cpp */; };
</span><del>-                7B8329BF1BB21FE300649A6E /* WASMFunctionLLVMIRGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 7B8329BE1BB21FD100649A6E /* WASMFunctionLLVMIRGenerator.h */; settings = {ATTRIBUTES = (Private, ); }; };
</del><ins>+                7B8329BF1BB21FE300649A6E /* WASMFunctionB3IRGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 7B8329BE1BB21FD100649A6E /* WASMFunctionB3IRGenerator.h */; settings = {ATTRIBUTES = (Private, ); }; };
</ins><span class="cx">                 7B98D1361B60CD5F0023B1A4 /* JSWASMModule.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7B98D1341B60CD5A0023B1A4 /* JSWASMModule.cpp */; };
</span><span class="cx">                 7B98D1371B60CD620023B1A4 /* JSWASMModule.h in Headers */ = {isa = PBXBuildFile; fileRef = 7B98D1351B60CD5A0023B1A4 /* JSWASMModule.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 7BC547D31B6959A100959B58 /* WASMFormat.h in Headers */ = {isa = PBXBuildFile; fileRef = 7BC547D21B69599B00959B58 /* WASMFormat.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="lines">@@ -2886,8 +2886,8 @@
</span><span class="cx">                 0FEA0A01170513DB00BB722C /* FTLCompile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLCompile.h; path = ftl/FTLCompile.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FEA0A02170513DB00BB722C /* FTLJITCode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLJITCode.cpp; path = ftl/FTLJITCode.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FEA0A03170513DB00BB722C /* FTLJITCode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLJITCode.h; path = ftl/FTLJITCode.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><del>-                0FEA0A04170513DB00BB722C /* FTLLowerDFGToLLVM.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLowerDFGToLLVM.cpp; path = ftl/FTLLowerDFGToLLVM.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
-                0FEA0A05170513DB00BB722C /* FTLLowerDFGToLLVM.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLowerDFGToLLVM.h; path = ftl/FTLLowerDFGToLLVM.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</del><ins>+                0FEA0A04170513DB00BB722C /* FTLLowerDFGToB3.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLowerDFGToB3.cpp; path = ftl/FTLLowerDFGToB3.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
+                0FEA0A05170513DB00BB722C /* FTLLowerDFGToB3.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLowerDFGToB3.h; path = ftl/FTLLowerDFGToB3.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</ins><span class="cx">                 0FEA0A07170513DB00BB722C /* FTLState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLState.h; path = ftl/FTLState.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FEA0A151706BB9000BB722C /* FTLState.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLState.cpp; path = ftl/FTLState.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FEA0A171708B00700BB722C /* FTLAbstractHeap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLAbstractHeap.cpp; path = ftl/FTLAbstractHeap.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="lines">@@ -3449,7 +3449,7 @@
</span><span class="cx">                 7B39F76A1B62DE2200360FB4 /* WASMModuleParser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMModuleParser.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 7B39F76C1B62DE2200360FB4 /* WASMReader.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMReader.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 7B39F7711B63574B00360FB4 /* WASMReader.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WASMReader.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><del>-                7B8329BE1BB21FD100649A6E /* WASMFunctionLLVMIRGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMFunctionLLVMIRGenerator.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</del><ins>+                7B8329BE1BB21FD100649A6E /* WASMFunctionB3IRGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMFunctionB3IRGenerator.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</ins><span class="cx">                 7B98D1341B60CD5A0023B1A4 /* JSWASMModule.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSWASMModule.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 7B98D1351B60CD5A0023B1A4 /* JSWASMModule.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSWASMModule.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 7BC547D21B69599B00959B58 /* WASMFormat.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMFormat.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="lines">@@ -4555,8 +4555,8 @@
</span><span class="cx">                                 0F8F2B94172E049E007DBDA5 /* FTLLink.h */,
</span><span class="cx">                                 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */,
</span><span class="cx">                                 0FCEFADE180738C000472CE4 /* FTLLocation.h */,
</span><del>-                                0FEA0A04170513DB00BB722C /* FTLLowerDFGToLLVM.cpp */,
-                                0FEA0A05170513DB00BB722C /* FTLLowerDFGToLLVM.h */,
</del><ins>+                                0FEA0A04170513DB00BB722C /* FTLLowerDFGToB3.cpp */,
+                                0FEA0A05170513DB00BB722C /* FTLLowerDFGToB3.h */,
</ins><span class="cx">                                 A7D89D0117A0B90400773AD8 /* FTLLoweredNodeValue.h */,
</span><span class="cx">                                 0F2B9CF219D0BAC100B1D1B5 /* FTLOperations.cpp */,
</span><span class="cx">                                 0F2B9CF319D0BAC100B1D1B5 /* FTLOperations.h */,
</span><span class="lines">@@ -5374,7 +5374,7 @@
</span><span class="cx">                                 7B0247521B8682D500542440 /* WASMConstants.h */,
</span><span class="cx">                                 7BC547D21B69599B00959B58 /* WASMFormat.h */,
</span><span class="cx">                                 7B2E010D1B97AA5800EF5D5C /* WASMFunctionCompiler.h */,
</span><del>-                                7B8329BE1BB21FD100649A6E /* WASMFunctionLLVMIRGenerator.h */,
</del><ins>+                                7B8329BE1BB21FD100649A6E /* WASMFunctionB3IRGenerator.h */,
</ins><span class="cx">                                 7B0247531B8682D500542440 /* WASMFunctionParser.cpp */,
</span><span class="cx">                                 7B0247541B8682D500542440 /* WASMFunctionParser.h */,
</span><span class="cx">                                 7B0247581B868EAE00542440 /* WASMFunctionSyntaxChecker.h */,
</span><span class="lines">@@ -7354,7 +7354,7 @@
</span><span class="cx">                                 0FB4FB751BC843140025CA5A /* FTLLazySlowPathCall.h in Headers */,
</span><span class="cx">                                 0F8F2B96172E04A3007DBDA5 /* FTLLink.h in Headers */,
</span><span class="cx">                                 0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */,
</span><del>-                                0FEA0A10170513DB00BB722C /* FTLLowerDFGToLLVM.h in Headers */,
</del><ins>+                                0FEA0A10170513DB00BB722C /* FTLLowerDFGToB3.h in Headers */,
</ins><span class="cx">                                 A7D89D0217A0B90400773AD8 /* FTLLoweredNodeValue.h in Headers */,
</span><span class="cx">                                 0F2B9CF919D0BAC100B1D1B5 /* FTLOperations.h in Headers */,
</span><span class="cx">                                 0F338E101BF0276C0013C88F /* B3MoveConstants.h in Headers */,
</span><span class="lines">@@ -7991,7 +7991,7 @@
</span><span class="cx">                                 7B0247551B8682DD00542440 /* WASMConstants.h in Headers */,
</span><span class="cx">                                 7BC547D31B6959A100959B58 /* WASMFormat.h in Headers */,
</span><span class="cx">                                 7B2E010E1B97AA6900EF5D5C /* WASMFunctionCompiler.h in Headers */,
</span><del>-                                7B8329BF1BB21FE300649A6E /* WASMFunctionLLVMIRGenerator.h in Headers */,
</del><ins>+                                7B8329BF1BB21FE300649A6E /* WASMFunctionB3IRGenerator.h in Headers */,
</ins><span class="cx">                                 7B0247571B8682E400542440 /* WASMFunctionParser.h in Headers */,
</span><span class="cx">                                 0F6B8ADD1C4EFAC300969052 /* B3SSACalculator.h in Headers */,
</span><span class="cx">                                 7B0247591B868EB700542440 /* WASMFunctionSyntaxChecker.h in Headers */,
</span><span class="lines">@@ -8880,7 +8880,7 @@
</span><span class="cx">                                 0FB4FB731BC843140025CA5A /* FTLLazySlowPath.cpp in Sources */,
</span><span class="cx">                                 0F8F2B95172E04A0007DBDA5 /* FTLLink.cpp in Sources */,
</span><span class="cx">                                 0FCEFADF180738C000472CE4 /* FTLLocation.cpp in Sources */,
</span><del>-                                0FEA0A0F170513DB00BB722C /* FTLLowerDFGToLLVM.cpp in Sources */,
</del><ins>+                                0FEA0A0F170513DB00BB722C /* FTLLowerDFGToB3.cpp in Sources */,
</ins><span class="cx">                                 0F2B9CF819D0BAC100B1D1B5 /* FTLOperations.cpp in Sources */,
</span><span class="cx">                                 0FD8A31B17D51F2200CA2C40 /* FTLOSREntry.cpp in Sources */,
</span><span class="cx">                                 0F235BDC17178E1C00690C7F /* FTLOSRExit.cpp in Sources */,
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGArgumentsEliminationPhasecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -428,7 +428,7 @@
</span><span class="cx">                     // FIXME: For ClonedArguments, we would have already done a separate bounds check.
</span><span class="cx">                     // This code will cause us to have two bounds checks - the original one that we
</span><span class="cx">                     // already factored out in SSALoweringPhase, and the new one we insert here, which is
</span><del>-                    // often implicitly part of GetMyArgumentByVal. LLVM will probably eliminate the
</del><ins>+                    // often implicitly part of GetMyArgumentByVal. B3 will probably eliminate the
</ins><span class="cx">                     // second bounds check, but still - that's just silly.
</span><span class="cx">                     // https://bugs.webkit.org/show_bug.cgi?id=143076
</span><span class="cx">                     
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGOSRAvailabilityAnalysisPhaseh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -37,7 +37,7 @@
</span><span class="cx"> 
</span><span class="cx"> // Computes BasicBlock::ssa-&gt;availabilityAtHead/Tail. This is a forward flow type inference
</span><span class="cx"> // over MovHints and SetLocals. This analysis is run directly by the Plan for preparing for
</span><del>-// lowering to LLVM IR, but it can also be used as a utility. Note that if you run it before
</del><ins>+// lowering to B3 IR, but it can also be used as a utility. Note that if you run it before
</ins><span class="cx"> // stack layout, all of the flush availability will omit the virtual register - but it will
</span><span class="cx"> // tell you the format.
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGPlancpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -87,7 +87,7 @@
</span><span class="cx"> #include &quot;FTLCompile.h&quot;
</span><span class="cx"> #include &quot;FTLFail.h&quot;
</span><span class="cx"> #include &quot;FTLLink.h&quot;
</span><del>-#include &quot;FTLLowerDFGToLLVM.h&quot;
</del><ins>+#include &quot;FTLLowerDFGToB3.h&quot;
</ins><span class="cx"> #include &quot;FTLState.h&quot;
</span><span class="cx"> #endif
</span><span class="cx"> 
</span><span class="lines">@@ -98,7 +98,7 @@
</span><span class="cx"> double totalDFGCompileTime;
</span><span class="cx"> double totalFTLCompileTime;
</span><span class="cx"> double totalFTLDFGCompileTime;
</span><del>-double totalFTLLLVMCompileTime;
</del><ins>+double totalFTLB3CompileTime;
</ins><span class="cx"> 
</span><span class="cx"> void dumpAndVerifyGraph(Graph&amp; graph, const char* text, bool forceDump = false)
</span><span class="cx"> {
</span><span class="lines">@@ -195,7 +195,7 @@
</span><span class="cx">         if (isFTL(mode)) {
</span><span class="cx">             totalFTLCompileTime += after - before;
</span><span class="cx">             totalFTLDFGCompileTime += m_timeBeforeFTL - before;
</span><del>-            totalFTLLLVMCompileTime += after - m_timeBeforeFTL;
</del><ins>+            totalFTLB3CompileTime += after - m_timeBeforeFTL;
</ins><span class="cx">         } else
</span><span class="cx">             totalDFGCompileTime += after - before;
</span><span class="cx">     }
</span><span class="lines">@@ -463,12 +463,12 @@
</span><span class="cx">             return CancelPath;
</span><span class="cx"> 
</span><span class="cx">         FTL::State state(dfg);
</span><del>-        FTL::lowerDFGToLLVM(state);
</del><ins>+        FTL::lowerDFGToB3(state);
</ins><span class="cx">         
</span><span class="cx">         if (computeCompileTimes())
</span><span class="cx">             m_timeBeforeFTL = monotonicallyIncreasingTimeMS();
</span><span class="cx">         
</span><del>-        if (Options::llvmAlwaysFailsBeforeCompile()) {
</del><ins>+        if (Options::b3AlwaysFailsBeforeCompile()) {
</ins><span class="cx">             FTL::fail(state);
</span><span class="cx">             return FTLPath;
</span><span class="cx">         }
</span><span class="lines">@@ -477,7 +477,7 @@
</span><span class="cx">         if (safepointResult.didGetCancelled())
</span><span class="cx">             return CancelPath;
</span><span class="cx">         
</span><del>-        if (Options::llvmAlwaysFailsBeforeLink()) {
</del><ins>+        if (Options::b3AlwaysFailsBeforeLink()) {
</ins><span class="cx">             FTL::fail(state);
</span><span class="cx">             return FTLPath;
</span><span class="cx">         }
</span><span class="lines">@@ -666,7 +666,7 @@
</span><span class="cx">         result.add(&quot;DFG Compile Time&quot;, totalDFGCompileTime);
</span><span class="cx">         result.add(&quot;FTL Compile Time&quot;, totalFTLCompileTime);
</span><span class="cx">         result.add(&quot;FTL (DFG) Compile Time&quot;, totalFTLDFGCompileTime);
</span><del>-        result.add(&quot;FTL (B3) Compile Time&quot;, totalFTLLLVMCompileTime);
</del><ins>+        result.add(&quot;FTL (B3) Compile Time&quot;, totalFTLB3CompileTime);
</ins><span class="cx">     }
</span><span class="cx">     return result;
</span><span class="cx"> }
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGPutStackSinkingPhasecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -57,7 +57,7 @@
</span><span class="cx">         // for sunken PutStacks in the presence of interesting control flow merges, and where the
</span><span class="cx">         // value being PutStack'd is also otherwise live in the DFG code. We could work around this
</span><span class="cx">         // by doing the sinking over CPS, or maybe just by doing really smart hoisting. It's also
</span><del>-        // possible that the duplicate Phi graph can be deduplicated by LLVM. It would be best if we
</del><ins>+        // possible that the duplicate Phi graph can be deduplicated by B3. It would be best if we
</ins><span class="cx">         // could observe that there is already a Phi graph in place that does what we want. In
</span><span class="cx">         // principle if we have a request to place a Phi at a particular place, we could just check
</span><span class="cx">         // if there is already a Phi that does what we want. Because PutStackSinkingPhase runs just
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGSSAConversionPhaseh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -81,12 +81,6 @@
</span><span class="cx"> //   the caveat that the Phi predecessor block lists would have to be
</span><span class="cx"> //   updated).
</span><span class="cx"> //
</span><del>-//   The easiest way to convert from this SSA form into a different SSA
-//   form is to redo SSA conversion for Phi functions. That is, treat each
-//   Phi in our IR as a non-SSA variable in the foreign IR (so, as an
-//   alloca in LLVM IR, for example); the Upsilons that refer to the Phi
-//   become stores and the Phis themselves become loads.
-//
</del><span class="cx"> //   Fun fact: Upsilon is so named because it comes before Phi in the
</span><span class="cx"> //   alphabet. It can be written as &quot;Y&quot;.
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGStaticExecutionCountEstimationPhaseh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGStaticExecutionCountEstimationPhase.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGStaticExecutionCountEstimationPhase.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGStaticExecutionCountEstimationPhase.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -38,10 +38,6 @@
</span><span class="cx"> // ability to do accurate static estimations. Hence we lock in the estimates early.
</span><span class="cx"> // Ideally, we would have dynamic information, but we don't right now, so this is as
</span><span class="cx"> // good as it gets.
</span><del>-//
-// It's worth noting that if we didn't have this phase, then the static estimation
-// would be performed by LLVM instead. It's worth trying to make this phase perform
-// the estimates using the same heuristics that LLVM would use.
</del><span class="cx"> 
</span><span class="cx"> bool performStaticExecutionCountEstimation(Graph&amp;);
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGUnificationPhasecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGUnificationPhase.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGUnificationPhase.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/dfg/DFGUnificationPhase.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -60,10 +60,8 @@
</span><span class="cx">                     if (!phi-&gt;children.child(childIdx))
</span><span class="cx">                         break;
</span><span class="cx"> 
</span><del>-                    // We'd like to reverse the order of unification because it helps to reveal
-                    // more bugs on our end, though it's otherwise not observable. But we do it
-                    // this way because it works around a bug in open source LLVM's live-outs
-                    // computation.
</del><ins>+                    // FIXME: Consider reversing the order of this unification, since the other
+                    // order will reveal more bugs. https://bugs.webkit.org/show_bug.cgi?id=154368
</ins><span class="cx">                     phi-&gt;variableAccessData()-&gt;unify(phi-&gt;children.child(childIdx)-&gt;variableAccessData());
</span><span class="cx">                 }
</span><span class="cx">             }
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredisassemblerARM64Disassemblercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/disassembler/ARM64Disassembler.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/disassembler/ARM64Disassembler.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/disassembler/ARM64Disassembler.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -56,17 +56,3 @@
</span><span class="cx"> 
</span><span class="cx"> #endif // USE(ARM64_DISASSEMBLER)
</span><span class="cx"> 
</span><del>-#if USE(LLVM_DISASSEMBLER) &amp;&amp; CPU(ARM64)
-
-#include &quot;LLVMDisassembler.h&quot;
-
-namespace JSC {
-
-bool tryToDisassemble(const MacroAssemblerCodePtr&amp; codePtr, size_t size, const char* prefix, PrintStream&amp; out, InstructionSubsetHint hint)
-{
-    return tryToDisassembleWithLLVM(codePtr, size, prefix, out, hint);
-}
-
-} // namespace JSC
-
-#endif // USE(LLVM_DISASSEMBLER) &amp;&amp; CPU(ARM64)
</del></span></pre></div>
<a id="trunkSourceJavaScriptCoredisassemblerX86Disassemblercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/disassembler/X86Disassembler.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/disassembler/X86Disassembler.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/disassembler/X86Disassembler.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -27,7 +27,7 @@
</span><span class="cx"> #include &quot;Disassembler.h&quot;
</span><span class="cx"> 
</span><span class="cx"> #if ENABLE(DISASSEMBLER)
</span><del>-#if USE(UDIS86) || (USE(LLVM_DISASSEMBLER) &amp;&amp; (CPU(X86_64) || CPU(X86)))
</del><ins>+#if USE(UDIS86)
</ins><span class="cx"> 
</span><span class="cx"> #include &quot;MacroAssemblerCodeRef.h&quot;
</span><span class="cx"> #include &quot;Options.h&quot;
</span><span class="lines">@@ -35,13 +35,6 @@
</span><span class="cx"> 
</span><span class="cx"> namespace JSC {
</span><span class="cx"> 
</span><del>-// This horrifying monster is needed because neither of our disassemblers supports
-// all of x86, and using them together to disassemble the same instruction stream
-// would result in a fairly jarring print-out since they print in different
-// styles. Maybe we can do better in the future, but for now the caller hints
-// whether he's using the subset of the architecture that our MacroAssembler
-// supports (in which case we go with UDis86) or if he's using the LLVM subset.
-
</del><span class="cx"> bool tryToDisassemble(const MacroAssemblerCodePtr&amp; codePtr, size_t size, const char* prefix, PrintStream&amp; out)
</span><span class="cx"> {
</span><span class="cx">     return tryToDisassembleWithUDis86(codePtr, size, prefix, out);
</span><span class="lines">@@ -49,5 +42,5 @@
</span><span class="cx"> 
</span><span class="cx"> } // namespace JSC
</span><span class="cx"> 
</span><del>-#endif // USE(UDIS86) || USE(LLVM_DISASSEMBLER)
</del><ins>+#endif // USE(UDIS86)
</ins><span class="cx"> #endif // ENABLE(DISASSEMBLER)
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLAbstractHeapcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -115,7 +115,9 @@
</span><span class="cx">     //
</span><span class="cx">     //    Blah_neg_A
</span><span class="cx">     //
</span><del>-    // This is important because LLVM uses the string to distinguish the types.
</del><ins>+    // This used to be important because we used to use the string to distinguish the types. This is
+    // not relevant anymore, and this code will be removed eventually.
+    // FIXME: https://bugs.webkit.org/show_bug.cgi?id=154319
</ins><span class="cx">     
</span><span class="cx">     static const char* negSplit = &quot;_neg_&quot;;
</span><span class="cx">     static const char* posSplit = &quot;_&quot;;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLAbstractHeaph"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLAbstractHeap.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -39,9 +39,9 @@
</span><span class="cx"> 
</span><span class="cx"> namespace JSC { namespace FTL {
</span><span class="cx"> 
</span><del>-// The FTL JIT tries to aid LLVM's TBAA. The FTL's notion of how this
-// happens is the AbstractHeap. AbstractHeaps are a simple type system
-// with sub-typing.
</del><ins>+// This is here because we used to generate LLVM's TBAA. In the future we will want to generate
+// B3 HeapRanges instead.
+// FIXME: https://bugs.webkit.org/show_bug.cgi?id=154319
</ins><span class="cx"> 
</span><span class="cx"> class AbstractHeapRepository;
</span><span class="cx"> class Output;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLFormattedValueh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLFormattedValue.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLFormattedValue.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLFormattedValue.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -35,10 +35,7 @@
</span><span class="cx"> 
</span><span class="cx"> // This class is mostly used for OSR; it's a way of specifying how a value is formatted
</span><span class="cx"> // in cases where it wouldn't have been obvious from looking at other indicators (like
</span><del>-// the type of the LLVMValueRef or the type of the DFG::Node). Typically this arises
-// because LLVMValueRef doesn't give us the granularity we need to begin with, and we
-// use this in situations where there is no good way to say what node the value came
-// from.
</del><ins>+// the type of the B3::Value* or the type of the DFG::Node).
</ins><span class="cx"> 
</span><span class="cx"> class FormattedValue {
</span><span class="cx"> public:
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLJITFinalizercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -78,7 +78,7 @@
</span><span class="cx">     jitCode-&gt;initializeArityCheckEntrypoint(
</span><span class="cx">         FINALIZE_CODE_IF(
</span><span class="cx">             dumpDisassembly, *entrypointLinkBuffer,
</span><del>-            (&quot;FTL entrypoint thunk for %s with LLVM generated code at %p&quot;, toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)));
</del><ins>+            (&quot;FTL entrypoint thunk for %s with B3 generated code at %p&quot;, toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)));
</ins><span class="cx">     
</span><span class="cx">     m_plan.codeBlock-&gt;setJITCode(jitCode);
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLinkcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLLink.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLink.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLLink.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -48,7 +48,7 @@
</span><span class="cx">     CodeBlock* codeBlock = graph.m_codeBlock;
</span><span class="cx">     VM&amp; vm = graph.m_vm;
</span><span class="cx">     
</span><del>-    // LLVM will create its own jump tables as needed.
</del><ins>+    // B3 will create its own jump tables as needed.
</ins><span class="cx">     codeBlock-&gt;clearSwitchJumpTables();
</span><span class="cx"> 
</span><span class="cx">     state.jitCode-&gt;common.requiredRegisterCountForExit = graph.requiredRegisterCountForExit();
</span><span class="lines">@@ -175,7 +175,7 @@
</span><span class="cx">         // We jump to here straight from DFG code, after having boxed up all of the
</span><span class="cx">         // values into the scratch buffer. Everything should be good to go - at this
</span><span class="cx">         // point we've even done the stack check. Basically we just have to make the
</span><del>-        // call to the LLVM-generated code.
</del><ins>+        // call to the B3-generated code.
</ins><span class="cx">         CCallHelpers::Label start = jit.label();
</span><span class="cx">         jit.emitFunctionEpilogue();
</span><span class="cx">         CCallHelpers::Jump mainPathJump = jit.jump();
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLocationcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLLocation.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLocation.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLLocation.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -130,7 +130,7 @@
</span><span class="cx">     
</span><span class="cx">     switch (kind()) {
</span><span class="cx">     case Register:
</span><del>-        // LLVM used some register that we don't know about!
</del><ins>+        // B3 used some register that we don't know about!
</ins><span class="cx">         dataLog(&quot;Unrecognized location: &quot;, *this, &quot;\n&quot;);
</span><span class="cx">         RELEASE_ASSERT_NOT_REACHED();
</span><span class="cx">         return;
</span><span class="lines">@@ -151,8 +151,6 @@
</span><span class="cx">         return;
</span><span class="cx">         
</span><span class="cx">     case Unprocessed:
</span><del>-        // Should never see this - it's an enumeration entry on LLVM's side that means that
-        // it hasn't processed this location.
</del><span class="cx">         RELEASE_ASSERT_NOT_REACHED();
</span><span class="cx">         return;
</span><span class="cx">     }
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToB3cppfromrev196731trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMcpp"></a>
<div class="copfile"><h4>Copied: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp (from rev 196731, trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp) (0 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp                                (rev 0)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -0,0 +1,10459 @@
</span><ins>+/*
+ * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include &quot;config.h&quot;
+#include &quot;FTLLowerDFGToB3.h&quot;
+
+#if ENABLE(FTL_JIT)
+
+#include &quot;AirGenerationContext.h&quot;
+#include &quot;AllowMacroScratchRegisterUsage.h&quot;
+#include &quot;B3StackmapGenerationParams.h&quot;
+#include &quot;CallFrameShuffler.h&quot;
+#include &quot;CodeBlockWithJITType.h&quot;
+#include &quot;DFGAbstractInterpreterInlines.h&quot;
+#include &quot;DFGDominators.h&quot;
+#include &quot;DFGInPlaceAbstractState.h&quot;
+#include &quot;DFGOSRAvailabilityAnalysisPhase.h&quot;
+#include &quot;DFGOSRExitFuzz.h&quot;
+#include &quot;DirectArguments.h&quot;
+#include &quot;FTLAbstractHeapRepository.h&quot;
+#include &quot;FTLAvailableRecovery.h&quot;
+#include &quot;FTLExceptionTarget.h&quot;
+#include &quot;FTLForOSREntryJITCode.h&quot;
+#include &quot;FTLFormattedValue.h&quot;
+#include &quot;FTLLazySlowPathCall.h&quot;
+#include &quot;FTLLoweredNodeValue.h&quot;
+#include &quot;FTLOperations.h&quot;
+#include &quot;FTLOutput.h&quot;
+#include &quot;FTLPatchpointExceptionHandle.h&quot;
+#include &quot;FTLThunks.h&quot;
+#include &quot;FTLWeightedTarget.h&quot;
+#include &quot;JITAddGenerator.h&quot;
+#include &quot;JITBitAndGenerator.h&quot;
+#include &quot;JITBitOrGenerator.h&quot;
+#include &quot;JITBitXorGenerator.h&quot;
+#include &quot;JITDivGenerator.h&quot;
+#include &quot;JITInlineCacheGenerator.h&quot;
+#include &quot;JITLeftShiftGenerator.h&quot;
+#include &quot;JITMulGenerator.h&quot;
+#include &quot;JITRightShiftGenerator.h&quot;
+#include &quot;JITSubGenerator.h&quot;
+#include &quot;JSCInlines.h&quot;
+#include &quot;JSGeneratorFunction.h&quot;
+#include &quot;JSLexicalEnvironment.h&quot;
+#include &quot;OperandsInlines.h&quot;
+#include &quot;ScopedArguments.h&quot;
+#include &quot;ScopedArgumentsTable.h&quot;
+#include &quot;ScratchRegisterAllocator.h&quot;
+#include &quot;SetupVarargsFrame.h&quot;
+#include &quot;VirtualRegister.h&quot;
+#include &quot;Watchdog.h&quot;
+#include &lt;atomic&gt;
+#if !OS(WINDOWS)
+#include &lt;dlfcn.h&gt;
+#endif
+#include &lt;unordered_set&gt;
+#include &lt;wtf/Box.h&gt;
+#include &lt;wtf/ProcessID.h&gt;
+
+namespace JSC { namespace FTL {
+
+using namespace B3;
+using namespace DFG;
+
+namespace {
+
+std::atomic&lt;int&gt; compileCounter;
+
+#if ASSERT_DISABLED
+NO_RETURN_DUE_TO_CRASH static void ftlUnreachable()
+{
+    CRASH();
+}
+#else
+NO_RETURN_DUE_TO_CRASH static void ftlUnreachable(
+    CodeBlock* codeBlock, BlockIndex blockIndex, unsigned nodeIndex)
+{
+    dataLog(&quot;Crashing in thought-to-be-unreachable FTL-generated code for &quot;, pointerDump(codeBlock), &quot; at basic block #&quot;, blockIndex);
+    if (nodeIndex != UINT_MAX)
+        dataLog(&quot;, node @&quot;, nodeIndex);
+    dataLog(&quot;.\n&quot;);
+    CRASH();
+}
+#endif
+
+// Using this instead of typeCheck() helps to reduce the load on B3, by creating
+// significantly less dead code.
+#define FTL_TYPE_CHECK_WITH_EXIT_KIND(exitKind, lowValue, highValue, typesPassedThrough, failCondition) do { \
+        FormattedValue _ftc_lowValue = (lowValue);                      \
+        Edge _ftc_highValue = (highValue);                              \
+        SpeculatedType _ftc_typesPassedThrough = (typesPassedThrough);  \
+        if (!m_interpreter.needsTypeCheck(_ftc_highValue, _ftc_typesPassedThrough)) \
+            break;                                                      \
+        typeCheck(_ftc_lowValue, _ftc_highValue, _ftc_typesPassedThrough, (failCondition), exitKind); \
+    } while (false)
+
+#define FTL_TYPE_CHECK(lowValue, highValue, typesPassedThrough, failCondition) \
+    FTL_TYPE_CHECK_WITH_EXIT_KIND(BadType, lowValue, highValue, typesPassedThrough, failCondition)
+
+class LowerDFGToB3 {
+    WTF_MAKE_NONCOPYABLE(LowerDFGToB3);
+public:
+    LowerDFGToB3(State&amp; state)
+        : m_graph(state.graph)
+        , m_ftlState(state)
+        , m_out(state)
+        , m_proc(*state.proc)
+        , m_state(state.graph)
+        , m_interpreter(state.graph, m_state)
+    {
+    }
+    
+    void lower()
+    {
+        State* state = &amp;m_ftlState;
+        
+        CString name;
+        if (verboseCompilationEnabled()) {
+            name = toCString(
+                &quot;jsBody_&quot;, ++compileCounter, &quot;_&quot;, codeBlock()-&gt;inferredName(),
+                &quot;_&quot;, codeBlock()-&gt;hash());
+        } else
+            name = &quot;jsBody&quot;;
+        
+        m_graph.ensureDominators();
+
+        if (verboseCompilationEnabled())
+            dataLog(&quot;Function ready, beginning lowering.\n&quot;);
+
+        m_out.initialize(m_heaps);
+
+        // We use prologue frequency for all of the initialization code.
+        m_out.setFrequency(1);
+        
+        m_prologue = FTL_NEW_BLOCK(m_out, (&quot;Prologue&quot;));
+        LBasicBlock stackOverflow = FTL_NEW_BLOCK(m_out, (&quot;Stack overflow&quot;));
+        m_handleExceptions = FTL_NEW_BLOCK(m_out, (&quot;Handle Exceptions&quot;));
+        
+        LBasicBlock checkArguments = FTL_NEW_BLOCK(m_out, (&quot;Check arguments&quot;));
+
+        for (BlockIndex blockIndex = 0; blockIndex &lt; m_graph.numBlocks(); ++blockIndex) {
+            m_highBlock = m_graph.block(blockIndex);
+            if (!m_highBlock)
+                continue;
+            m_out.setFrequency(m_highBlock-&gt;executionCount);
+            m_blocks.add(m_highBlock, FTL_NEW_BLOCK(m_out, (&quot;Block &quot;, *m_highBlock)));
+        }
+
+        // Back to prologue frequency for any blocks that get sneakily created in the initialization code.
+        m_out.setFrequency(1);
+        
+        m_out.appendTo(m_prologue, stackOverflow);
+        m_out.initializeConstants(m_proc, m_prologue);
+        createPhiVariables();
+
+        size_t sizeOfCaptured = sizeof(JSValue) * m_graph.m_nextMachineLocal;
+        B3::SlotBaseValue* capturedBase = m_out.lockedStackSlot(sizeOfCaptured);
+        m_captured = m_out.add(capturedBase, m_out.constIntPtr(sizeOfCaptured));
+        state-&gt;capturedValue = capturedBase-&gt;slot();
+
+        auto preOrder = m_graph.blocksInPreOrder();
+
+        // We should not create any alloca's after this point, since they will cease to
+        // be mem2reg candidates.
+        
+        m_callFrame = m_out.framePointer();
+        m_tagTypeNumber = m_out.constInt64(TagTypeNumber);
+        m_tagMask = m_out.constInt64(TagMask);
+
+        // Make sure that B3 knows that we really care about the mask registers. This forces the
+        // constants to be materialized in registers.
+        m_proc.addFastConstant(m_tagTypeNumber-&gt;key());
+        m_proc.addFastConstant(m_tagMask-&gt;key());
+        
+        m_out.storePtr(m_out.constIntPtr(codeBlock()), addressFor(JSStack::CodeBlock));
+        
+        m_out.branch(
+            didOverflowStack(), rarely(stackOverflow), usually(checkArguments));
+        
+        m_out.appendTo(stackOverflow, m_handleExceptions);
+        m_out.call(m_out.voidType, m_out.operation(operationThrowStackOverflowError), m_callFrame, m_out.constIntPtr(codeBlock()));
+        m_out.patchpoint(Void)-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp;) {
+                // We are terminal, so we can clobber everything. That's why we don't claim to
+                // clobber scratch.
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                
+                jit.copyCalleeSavesToVMCalleeSavesBuffer();
+                jit.move(CCallHelpers::TrustedImmPtr(jit.vm()), GPRInfo::argumentGPR0);
+                jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
+                CCallHelpers::Call call = jit.call();
+                jit.jumpToExceptionHandler();
+
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        linkBuffer.link(call, FunctionPtr(lookupExceptionHandlerFromCallerFrame));
+                    });
+            });
+        m_out.unreachable();
+        
+        m_out.appendTo(m_handleExceptions, checkArguments);
+        Box&lt;CCallHelpers::Label&gt; exceptionHandler = state-&gt;exceptionHandler;
+        m_out.patchpoint(Void)-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp;) {
+                CCallHelpers::Jump jump = jit.jump();
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        linkBuffer.link(jump, linkBuffer.locationOf(*exceptionHandler));
+                    });
+            });
+        m_out.unreachable();
+        
+        m_out.appendTo(checkArguments, lowBlock(m_graph.block(0)));
+        availabilityMap().clear();
+        availabilityMap().m_locals = Operands&lt;Availability&gt;(codeBlock()-&gt;numParameters(), 0);
+        for (unsigned i = codeBlock()-&gt;numParameters(); i--;) {
+            availabilityMap().m_locals.argument(i) =
+                Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i)));
+        }
+        m_node = nullptr;
+        m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
+        for (unsigned i = codeBlock()-&gt;numParameters(); i--;) {
+            Node* node = m_graph.m_arguments[i];
+            VirtualRegister operand = virtualRegisterForArgument(i);
+            
+            LValue jsValue = m_out.load64(addressFor(operand));
+            
+            if (node) {
+                DFG_ASSERT(m_graph, node, operand == node-&gt;stackAccessData()-&gt;machineLocal);
+                
+                // This is a hack, but it's an effective one. It allows us to do CSE on the
+                // primordial load of arguments. This assumes that the GetLocal that got put in
+                // place of the original SetArgument doesn't have any effects before it. This
+                // should hold true.
+                m_loadedArgumentValues.add(node, jsValue);
+            }
+            
+            switch (m_graph.m_argumentFormats[i]) {
+            case FlushedInt32:
+                speculate(BadType, jsValueValue(jsValue), node, isNotInt32(jsValue));
+                break;
+            case FlushedBoolean:
+                speculate(BadType, jsValueValue(jsValue), node, isNotBoolean(jsValue));
+                break;
+            case FlushedCell:
+                speculate(BadType, jsValueValue(jsValue), node, isNotCell(jsValue));
+                break;
+            case FlushedJSValue:
+                break;
+            default:
+                DFG_CRASH(m_graph, node, &quot;Bad flush format for argument&quot;);
+                break;
+            }
+        }
+        m_out.jump(lowBlock(m_graph.block(0)));
+        
+        for (DFG::BasicBlock* block : preOrder)
+            compileBlock(block);
+
+        // We create all Phi's up front, but we may then decide not to compile the basic block
+        // that would have contained one of them. So this creates orphans, which triggers B3
+        // validation failures. Calling this fixes the issue.
+        //
+        // Note that you should avoid the temptation to make this call conditional upon
+        // validation being enabled. B3 makes no guarantees of any kind of correctness when
+        // dealing with IR that would have failed validation. For example, it would be valid to
+        // write a B3 phase that so aggressively assumes the lack of orphans that it would crash
+        // if any orphans were around. We might even have such phases already.
+        m_proc.deleteOrphans();
+
+        // We put the blocks into the B3 procedure in a super weird order. Now we reorder them.
+        m_out.applyBlockOrder();
+    }
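The comment above `m_proc.deleteOrphans()` describes a create-eagerly, sweep-later pattern: all Phis are created up front, some blocks are never compiled, and a final pass removes values that never got attached anywhere. A minimal standalone sketch of that shape (all names here are invented for illustration, not B3 API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical miniature of the "create values eagerly, delete orphans later"
// pattern: values are created up front, blocks may be skipped, and a final
// sweep removes values that never got attached to any block.
struct MiniValue {
    std::string name;
    bool attached = false; // set when a block actually emits the value
};

// Removes values that were created but never emitted, mirroring the role
// Procedure::deleteOrphans() plays for orphaned B3 Phis.
inline size_t deleteOrphans(std::vector<MiniValue>& pool)
{
    size_t before = pool.size();
    pool.erase(
        std::remove_if(pool.begin(), pool.end(),
            [](const MiniValue& v) { return !v.attached; }),
        pool.end());
    return before - pool.size();
}
```

As the comment warns, a sweep like this must run unconditionally: downstream passes are entitled to assume no orphans exist.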
+
+private:
+    
+    void createPhiVariables()
+    {
+        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
+            DFG::BasicBlock* block = m_graph.block(blockIndex);
+            if (!block)
+                continue;
+            for (unsigned nodeIndex = block-&gt;size(); nodeIndex--;) {
+                Node* node = block-&gt;at(nodeIndex);
+                if (node-&gt;op() != DFG::Phi)
+                    continue;
+                LType type;
+                switch (node-&gt;flags() &amp; NodeResultMask) {
+                case NodeResultDouble:
+                    type = m_out.doubleType;
+                    break;
+                case NodeResultInt32:
+                    type = m_out.int32;
+                    break;
+                case NodeResultInt52:
+                    type = m_out.int64;
+                    break;
+                case NodeResultBoolean:
+                    type = m_out.boolean;
+                    break;
+                case NodeResultJS:
+                    type = m_out.int64;
+                    break;
+                default:
+                    DFG_CRASH(m_graph, node, &quot;Bad Phi node result type&quot;);
+                    break;
+                }
+                m_phis.add(node, m_proc.add&lt;Value&gt;(B3::Phi, type, Origin(node)));
+            }
+        }
+    }
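The switch in `createPhiVariables()` maps a DFG result kind to the machine-level type the Phi will carry. A self-contained sketch of that mapping (enum names are invented; only the shape matches the code above):

```cpp
#include <stdexcept>

// Illustrative only: maps a DFG-style result kind to a machine-level type,
// the same shape of switch createPhiVariables() uses above.
enum class ResultKind { Double, Int32, Int52, Boolean, JS };
enum class MachineType { Double, Int32, Int64, Boolean };

inline MachineType machineTypeFor(ResultKind kind)
{
    switch (kind) {
    case ResultKind::Double:  return MachineType::Double;
    case ResultKind::Int32:   return MachineType::Int32;
    case ResultKind::Int52:   return MachineType::Int64; // Int52 lives in a 64-bit temp
    case ResultKind::Boolean: return MachineType::Boolean;
    case ResultKind::JS:      return MachineType::Int64; // a boxed JSValue is 64 bits wide
    }
    throw std::logic_error("bad result kind"); // analog of DFG_CRASH
}
```

Note that both Int52 and JS results collapse to a 64-bit type: the distinction matters to the DFG's type system, not to the underlying register width.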
+    
+    void compileBlock(DFG::BasicBlock* block)
+    {
+        if (!block)
+            return;
+        
+        if (verboseCompilationEnabled())
+            dataLog(&quot;Compiling block &quot;, *block, &quot;\n&quot;);
+        
+        m_highBlock = block;
+        
+        // Make sure that any blocks created while lowering code in the high block have the frequency of
+        // the high block. This is appropriate because B3 doesn't need precise frequencies. It just needs
+        // something roughly approximate for things like register allocation.
+        m_out.setFrequency(m_highBlock-&gt;executionCount);
+        
+        LBasicBlock lowBlock = m_blocks.get(m_highBlock);
+        
+        m_nextHighBlock = 0;
+        for (BlockIndex nextBlockIndex = m_highBlock-&gt;index + 1; nextBlockIndex &lt; m_graph.numBlocks(); ++nextBlockIndex) {
+            m_nextHighBlock = m_graph.block(nextBlockIndex);
+            if (m_nextHighBlock)
+                break;
+        }
+        m_nextLowBlock = m_nextHighBlock ? m_blocks.get(m_nextHighBlock) : 0;
+        
+        // All of this effort to find the next block gives us the ability to keep the
+        // generated IR in roughly program order. This ought not affect the performance
+        // of the generated code (since we expect B3 to reorder things) but it will
+        // make IR dumps easier to read.
+        m_out.appendTo(lowBlock, m_nextLowBlock);
+        
+        if (Options::ftlCrashes())
+            m_out.trap();
+        
+        if (!m_highBlock-&gt;cfaHasVisited) {
+            if (verboseCompilationEnabled())
+                dataLog(&quot;Bailing because CFA didn't reach.\n&quot;);
+            crash(m_highBlock-&gt;index, UINT_MAX);
+            return;
+        }
+        
+        m_availabilityCalculator.beginBlock(m_highBlock);
+        
+        m_state.reset();
+        m_state.beginBasicBlock(m_highBlock);
+        
+        for (m_nodeIndex = 0; m_nodeIndex &lt; m_highBlock-&gt;size(); ++m_nodeIndex) {
+            if (!compileNode(m_nodeIndex))
+                break;
+        }
+    }
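The loop that computes `m_nextHighBlock` has to probe forward because block slots in the graph can be null (blocks killed by earlier phases). The scan, reduced to its essentials:

```cpp
#include <vector>

// Sketch of the "find the next non-null high block" scan in compileBlock():
// block slots can be null, so we probe forward from the current index and
// return the first live one, or -1 if none remains in program order.
inline int nextLiveBlock(const std::vector<bool>& live, int index)
{
    for (int i = index + 1; i < static_cast<int>(live.size()); ++i) {
        if (live[i])
            return i;
    }
    return -1; // no successor in program order
}
```

As the surrounding comment says, this exists purely so IR dumps read in roughly program order; correctness does not depend on it.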
+
+    void safelyInvalidateAfterTermination()
+    {
+        if (verboseCompilationEnabled())
+            dataLog(&quot;Bailing.\n&quot;);
+        crash();
+
+        // Invalidate dominated blocks. Under normal circumstances we would expect
+        // them to be invalidated already. But you can have the CFA become more
+        // precise over time because the structures of objects change on the main
+        // thread. Failing to do this would result in weird crashes due to a value
+        // being used but not defined. Race conditions FTW!
+        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
+            DFG::BasicBlock* target = m_graph.block(blockIndex);
+            if (!target)
+                continue;
+            if (m_graph.m_dominators-&gt;dominates(m_highBlock, target)) {
+                if (verboseCompilationEnabled())
+                    dataLog(&quot;Block &quot;, *target, &quot; will bail also.\n&quot;);
+                target-&gt;cfaHasVisited = false;
+            }
+        }
+    }
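The invalidation sweep above walks every block and clears `cfaHasVisited` on anything the dead block dominates. A simplified analog, with the dominance query abstracted into a callback (a real implementation, like `DFG::Dominators`, computes a dominator tree):

```cpp
#include <vector>

// Shape of safelyInvalidateAfterTermination()'s sweep: once a block is known
// unreachable, every block it dominates must bail too. dominates() here is a
// stand-in supplied by the caller; returns how many blocks were invalidated.
inline int invalidateDominated(
    std::vector<bool>& cfaVisited, int deadBlock,
    bool (*dominates)(int dominator, int candidate))
{
    int invalidated = 0;
    for (int i = 0; i < static_cast<int>(cfaVisited.size()); ++i) {
        if (cfaVisited[i] && dominates(deadBlock, i)) {
            cfaVisited[i] = false;
            ++invalidated;
        }
    }
    return invalidated;
}
```

The point the comment makes is that this is a safety net: concurrent CFA refinement can leave dominated blocks looking reachable, so the sweep re-establishes the invariant rather than trusting it.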
+
+    bool compileNode(unsigned nodeIndex)
+    {
+        if (!m_state.isValid()) {
+            safelyInvalidateAfterTermination();
+            return false;
+        }
+        
+        m_node = m_highBlock-&gt;at(nodeIndex);
+        m_origin = m_node-&gt;origin;
+        m_out.setOrigin(m_node);
+        
+        if (verboseCompilationEnabled())
+            dataLog(&quot;Lowering &quot;, m_node, &quot;\n&quot;);
+        
+        m_availableRecoveries.resize(0);
+        
+        m_interpreter.startExecuting();
+        
+        switch (m_node-&gt;op()) {
+        case DFG::Upsilon:
+            compileUpsilon();
+            break;
+        case DFG::Phi:
+            compilePhi();
+            break;
+        case JSConstant:
+            break;
+        case DoubleConstant:
+            compileDoubleConstant();
+            break;
+        case Int52Constant:
+            compileInt52Constant();
+            break;
+        case DoubleRep:
+            compileDoubleRep();
+            break;
+        case DoubleAsInt32:
+            compileDoubleAsInt32();
+            break;
+        case DFG::ValueRep:
+            compileValueRep();
+            break;
+        case Int52Rep:
+            compileInt52Rep();
+            break;
+        case ValueToInt32:
+            compileValueToInt32();
+            break;
+        case BooleanToNumber:
+            compileBooleanToNumber();
+            break;
+        case ExtractOSREntryLocal:
+            compileExtractOSREntryLocal();
+            break;
+        case GetStack:
+            compileGetStack();
+            break;
+        case PutStack:
+            compilePutStack();
+            break;
+        case DFG::Check:
+            compileNoOp();
+            break;
+        case ToThis:
+            compileToThis();
+            break;
+        case ValueAdd:
+            compileValueAdd();
+            break;
+        case StrCat:
+            compileStrCat();
+            break;
+        case ArithAdd:
+        case ArithSub:
+            compileArithAddOrSub();
+            break;
+        case ArithClz32:
+            compileArithClz32();
+            break;
+        case ArithMul:
+            compileArithMul();
+            break;
+        case ArithDiv:
+            compileArithDiv();
+            break;
+        case ArithMod:
+            compileArithMod();
+            break;
+        case ArithMin:
+        case ArithMax:
+            compileArithMinOrMax();
+            break;
+        case ArithAbs:
+            compileArithAbs();
+            break;
+        case ArithSin:
+            compileArithSin();
+            break;
+        case ArithCos:
+            compileArithCos();
+            break;
+        case ArithPow:
+            compileArithPow();
+            break;
+        case ArithRandom:
+            compileArithRandom();
+            break;
+        case ArithRound:
+            compileArithRound();
+            break;
+        case ArithSqrt:
+            compileArithSqrt();
+            break;
+        case ArithLog:
+            compileArithLog();
+            break;
+        case ArithFRound:
+            compileArithFRound();
+            break;
+        case ArithNegate:
+            compileArithNegate();
+            break;
+        case DFG::BitAnd:
+            compileBitAnd();
+            break;
+        case DFG::BitOr:
+            compileBitOr();
+            break;
+        case DFG::BitXor:
+            compileBitXor();
+            break;
+        case BitRShift:
+            compileBitRShift();
+            break;
+        case BitLShift:
+            compileBitLShift();
+            break;
+        case BitURShift:
+            compileBitURShift();
+            break;
+        case UInt32ToNumber:
+            compileUInt32ToNumber();
+            break;
+        case CheckStructure:
+            compileCheckStructure();
+            break;
+        case CheckCell:
+            compileCheckCell();
+            break;
+        case CheckNotEmpty:
+            compileCheckNotEmpty();
+            break;
+        case CheckBadCell:
+            compileCheckBadCell();
+            break;
+        case CheckIdent:
+            compileCheckIdent();
+            break;
+        case GetExecutable:
+            compileGetExecutable();
+            break;
+        case ArrayifyToStructure:
+            compileArrayifyToStructure();
+            break;
+        case PutStructure:
+            compilePutStructure();
+            break;
+        case GetById:
+        case GetByIdFlush:
+            compileGetById();
+            break;
+        case In:
+            compileIn();
+            break;
+        case PutById:
+        case PutByIdDirect:
+        case PutByIdFlush:
+            compilePutById();
+            break;
+        case PutGetterById:
+        case PutSetterById:
+            compilePutAccessorById();
+            break;
+        case PutGetterSetterById:
+            compilePutGetterSetterById();
+            break;
+        case PutGetterByVal:
+        case PutSetterByVal:
+            compilePutAccessorByVal();
+            break;
+        case GetButterfly:
+            compileGetButterfly();
+            break;
+        case GetButterflyReadOnly:
+            compileGetButterflyReadOnly();
+            break;
+        case ConstantStoragePointer:
+            compileConstantStoragePointer();
+            break;
+        case GetIndexedPropertyStorage:
+            compileGetIndexedPropertyStorage();
+            break;
+        case CheckArray:
+            compileCheckArray();
+            break;
+        case GetArrayLength:
+            compileGetArrayLength();
+            break;
+        case CheckInBounds:
+            compileCheckInBounds();
+            break;
+        case GetByVal:
+            compileGetByVal();
+            break;
+        case GetMyArgumentByVal:
+            compileGetMyArgumentByVal();
+            break;
+        case PutByVal:
+        case PutByValAlias:
+        case PutByValDirect:
+            compilePutByVal();
+            break;
+        case ArrayPush:
+            compileArrayPush();
+            break;
+        case ArrayPop:
+            compileArrayPop();
+            break;
+        case CreateActivation:
+            compileCreateActivation();
+            break;
+        case NewFunction:
+        case NewArrowFunction:
+        case NewGeneratorFunction:
+            compileNewFunction();
+            break;
+        case CreateDirectArguments:
+            compileCreateDirectArguments();
+            break;
+        case CreateScopedArguments:
+            compileCreateScopedArguments();
+            break;
+        case CreateClonedArguments:
+            compileCreateClonedArguments();
+            break;
+        case NewObject:
+            compileNewObject();
+            break;
+        case NewArray:
+            compileNewArray();
+            break;
+        case NewArrayBuffer:
+            compileNewArrayBuffer();
+            break;
+        case NewArrayWithSize:
+            compileNewArrayWithSize();
+            break;
+        case NewTypedArray:
+            compileNewTypedArray();
+            break;
+        case GetTypedArrayByteOffset:
+            compileGetTypedArrayByteOffset();
+            break;
+        case AllocatePropertyStorage:
+            compileAllocatePropertyStorage();
+            break;
+        case ReallocatePropertyStorage:
+            compileReallocatePropertyStorage();
+            break;
+        case ToString:
+        case CallStringConstructor:
+            compileToStringOrCallStringConstructor();
+            break;
+        case ToPrimitive:
+            compileToPrimitive();
+            break;
+        case MakeRope:
+            compileMakeRope();
+            break;
+        case StringCharAt:
+            compileStringCharAt();
+            break;
+        case StringCharCodeAt:
+            compileStringCharCodeAt();
+            break;
+        case StringFromCharCode:
+            compileStringFromCharCode();
+            break;
+        case GetByOffset:
+        case GetGetterSetterByOffset:
+            compileGetByOffset();
+            break;
+        case GetGetter:
+            compileGetGetter();
+            break;
+        case GetSetter:
+            compileGetSetter();
+            break;
+        case MultiGetByOffset:
+            compileMultiGetByOffset();
+            break;
+        case PutByOffset:
+            compilePutByOffset();
+            break;
+        case MultiPutByOffset:
+            compileMultiPutByOffset();
+            break;
+        case GetGlobalVar:
+        case GetGlobalLexicalVariable:
+            compileGetGlobalVariable();
+            break;
+        case PutGlobalVariable:
+            compilePutGlobalVariable();
+            break;
+        case NotifyWrite:
+            compileNotifyWrite();
+            break;
+        case GetCallee:
+            compileGetCallee();
+            break;
+        case GetArgumentCount:
+            compileGetArgumentCount();
+            break;
+        case GetScope:
+            compileGetScope();
+            break;
+        case SkipScope:
+            compileSkipScope();
+            break;
+        case GetClosureVar:
+            compileGetClosureVar();
+            break;
+        case PutClosureVar:
+            compilePutClosureVar();
+            break;
+        case GetFromArguments:
+            compileGetFromArguments();
+            break;
+        case PutToArguments:
+            compilePutToArguments();
+            break;
+        case CompareEq:
+            compileCompareEq();
+            break;
+        case CompareStrictEq:
+            compileCompareStrictEq();
+            break;
+        case CompareLess:
+            compileCompareLess();
+            break;
+        case CompareLessEq:
+            compileCompareLessEq();
+            break;
+        case CompareGreater:
+            compileCompareGreater();
+            break;
+        case CompareGreaterEq:
+            compileCompareGreaterEq();
+            break;
+        case LogicalNot:
+            compileLogicalNot();
+            break;
+        case Call:
+        case TailCallInlinedCaller:
+        case Construct:
+            compileCallOrConstruct();
+            break;
+        case TailCall:
+            compileTailCall();
+            break;
+        case CallVarargs:
+        case CallForwardVarargs:
+        case TailCallVarargs:
+        case TailCallVarargsInlinedCaller:
+        case TailCallForwardVarargs:
+        case TailCallForwardVarargsInlinedCaller:
+        case ConstructVarargs:
+        case ConstructForwardVarargs:
+            compileCallOrConstructVarargs();
+            break;
+        case LoadVarargs:
+            compileLoadVarargs();
+            break;
+        case ForwardVarargs:
+            compileForwardVarargs();
+            break;
+        case DFG::Jump:
+            compileJump();
+            break;
+        case DFG::Branch:
+            compileBranch();
+            break;
+        case DFG::Switch:
+            compileSwitch();
+            break;
+        case DFG::Return:
+            compileReturn();
+            break;
+        case ForceOSRExit:
+            compileForceOSRExit();
+            break;
+        case Throw:
+        case ThrowReferenceError:
+            compileThrow();
+            break;
+        case InvalidationPoint:
+            compileInvalidationPoint();
+            break;
+        case IsUndefined:
+            compileIsUndefined();
+            break;
+        case IsBoolean:
+            compileIsBoolean();
+            break;
+        case IsNumber:
+            compileIsNumber();
+            break;
+        case IsString:
+            compileIsString();
+            break;
+        case IsObject:
+            compileIsObject();
+            break;
+        case IsObjectOrNull:
+            compileIsObjectOrNull();
+            break;
+        case IsFunction:
+            compileIsFunction();
+            break;
+        case TypeOf:
+            compileTypeOf();
+            break;
+        case CheckTypeInfoFlags:
+            compileCheckTypeInfoFlags();
+            break;
+        case OverridesHasInstance:
+            compileOverridesHasInstance();
+            break;
+        case InstanceOf:
+            compileInstanceOf();
+            break;
+        case InstanceOfCustom:
+            compileInstanceOfCustom();
+            break;
+        case CountExecution:
+            compileCountExecution();
+            break;
+        case StoreBarrier:
+            compileStoreBarrier();
+            break;
+        case HasIndexedProperty:
+            compileHasIndexedProperty();
+            break;
+        case HasGenericProperty:
+            compileHasGenericProperty();
+            break;
+        case HasStructureProperty:
+            compileHasStructureProperty();
+            break;
+        case GetDirectPname:
+            compileGetDirectPname();
+            break;
+        case GetEnumerableLength:
+            compileGetEnumerableLength();
+            break;
+        case GetPropertyEnumerator:
+            compileGetPropertyEnumerator();
+            break;
+        case GetEnumeratorStructurePname:
+            compileGetEnumeratorStructurePname();
+            break;
+        case GetEnumeratorGenericPname:
+            compileGetEnumeratorGenericPname();
+            break;
+        case ToIndexString:
+            compileToIndexString();
+            break;
+        case CheckStructureImmediate:
+            compileCheckStructureImmediate();
+            break;
+        case MaterializeNewObject:
+            compileMaterializeNewObject();
+            break;
+        case MaterializeCreateActivation:
+            compileMaterializeCreateActivation();
+            break;
+        case CheckWatchdogTimer:
+            compileCheckWatchdogTimer();
+            break;
+        case CopyRest:
+            compileCopyRest();
+            break;
+        case GetRestLength:
+            compileGetRestLength();
+            break;
+
+        case PhantomLocal:
+        case LoopHint:
+        case MovHint:
+        case ZombieHint:
+        case ExitOK:
+        case PhantomNewObject:
+        case PhantomNewFunction:
+        case PhantomNewGeneratorFunction:
+        case PhantomCreateActivation:
+        case PhantomDirectArguments:
+        case PhantomClonedArguments:
+        case PutHint:
+        case BottomValue:
+        case KillStack:
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Unrecognized node in FTL backend&quot;);
+            break;
+        }
+        
+        if (m_node-&gt;isTerminal())
+            return false;
+        
+        if (!m_state.isValid()) {
+            safelyInvalidateAfterTermination();
+            return false;
+        }
+
+        m_availabilityCalculator.executeNode(m_node);
+        m_interpreter.executeEffects(nodeIndex);
+        
+        return true;
+    }
+
+    void compileUpsilon()
+    {
+        LValue upsilonValue = nullptr;
+        switch (m_node-&gt;child1().useKind()) {
+        case DoubleRepUse:
+            upsilonValue = lowDouble(m_node-&gt;child1());
+            break;
+        case Int32Use:
+        case KnownInt32Use:
+            upsilonValue = lowInt32(m_node-&gt;child1());
+            break;
+        case Int52RepUse:
+            upsilonValue = lowInt52(m_node-&gt;child1());
+            break;
+        case BooleanUse:
+        case KnownBooleanUse:
+            upsilonValue = lowBoolean(m_node-&gt;child1());
+            break;
+        case CellUse:
+        case KnownCellUse:
+            upsilonValue = lowCell(m_node-&gt;child1());
+            break;
+        case UntypedUse:
+            upsilonValue = lowJSValue(m_node-&gt;child1());
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+        ValueFromBlock upsilon = m_out.anchor(upsilonValue);
+        LValue phiNode = m_phis.get(m_node-&gt;phi());
+        m_out.addIncomingToPhi(phiNode, upsilon);
+    }
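`compileUpsilon()` ends by calling `addIncomingToPhi`: in SSA-with-Upsilons form, each Upsilon pushes an incoming value onto the Phi it targets, and the Phi later selects among them by predecessor. A toy analog of that wiring (types invented for illustration):

```cpp
#include <utility>
#include <vector>

// Upsilon/Phi in SSA-with-upsilons form: each Upsilon appends an incoming
// (predecessor block, value) pair to the Phi it targets. Minimal analog of
// m_out.addIncomingToPhi(phiNode, upsilon).
struct MiniPhi {
    std::vector<std::pair<int, long>> incoming; // (predecessor block, value)
};

inline void addIncomingToPhi(MiniPhi& phi, int block, long value)
{
    phi.incoming.emplace_back(block, value);
}
```

This is why `createPhiVariables()` can create every Phi before any block is lowered: a Phi starts empty and accumulates its inputs as the Upsilons are compiled.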
+    
+    void compilePhi()
+    {
+        LValue phi = m_phis.get(m_node);
+        m_out.m_block-&gt;append(phi);
+
+        switch (m_node-&gt;flags() &amp; NodeResultMask) {
+        case NodeResultDouble:
+            setDouble(phi);
+            break;
+        case NodeResultInt32:
+            setInt32(phi);
+            break;
+        case NodeResultInt52:
+            setInt52(phi);
+            break;
+        case NodeResultBoolean:
+            setBoolean(phi);
+            break;
+        case NodeResultJS:
+            setJSValue(phi);
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+    
+    void compileDoubleConstant()
+    {
+        setDouble(m_out.constDouble(m_node-&gt;asNumber()));
+    }
+    
+    void compileInt52Constant()
+    {
+        int64_t value = m_node-&gt;asMachineInt();
+        
+        setInt52(m_out.constInt64(value &lt;&lt; JSValue::int52ShiftAmount));
+        setStrictInt52(m_out.constInt64(value));
+    }
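`compileInt52Constant()` records the same constant twice: once shifted left by `JSValue::int52ShiftAmount` (the "Int52" form) and once unshifted (the "strict" form). A sketch of the two representations; the shift amount below is an assumption for illustration, not necessarily JSC's actual value:

```cpp
#include <cstdint>

// Two forms of the same 52-bit integer, as in compileInt52Constant().
// kShift is an assumed placeholder for JSValue::int52ShiftAmount.
constexpr int kShift = 12; // assumption: 64 - 52 spare high bits

inline int64_t toShiftedInt52(int64_t strict) { return strict << kShift; }
inline int64_t toStrictInt52(int64_t shifted) { return shifted >> kShift; }
```

Keeping the value pre-shifted lets overflow checks on Int52 arithmetic fall out of ordinary 64-bit flag-setting operations, while the strict form is what conversions to double or JSValue consume.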
+
+    void compileDoubleRep()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case RealNumberUse: {
+            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
+            
+            LValue doubleValue = unboxDouble(value);
+            
+            LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;DoubleRep RealNumberUse int case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;DoubleRep continuation&quot;));
+            
+            ValueFromBlock fastResult = m_out.anchor(doubleValue);
+            m_out.branch(
+                m_out.doubleEqual(doubleValue, doubleValue),
+                usually(continuation), rarely(intCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(intCase, continuation);
+            
+            FTL_TYPE_CHECK(
+                jsValueValue(value), m_node-&gt;child1(), SpecBytecodeRealNumber,
+                isNotInt32(value, provenType(m_node-&gt;child1()) &amp; ~SpecFullDouble));
+            ValueFromBlock slowResult = m_out.anchor(m_out.intToDouble(unboxInt32(value)));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            
+            setDouble(m_out.phi(m_out.doubleType, fastResult, slowResult));
+            return;
+        }
+            
+        case NotCellUse:
+        case NumberUse: {
+            bool shouldConvertNonNumber = m_node-&gt;child1().useKind() == NotCellUse;
+            
+            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
+
+            LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing int case&quot;));
+            LBasicBlock doubleTesting = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing double case&quot;));
+            LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing double case&quot;));
+            LBasicBlock nonDoubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing undefined case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing continuation&quot;));
+            
+            m_out.branch(
+                isNotInt32(value, provenType(m_node-&gt;child1())),
+                unsure(doubleTesting), unsure(intCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(intCase, doubleTesting);
+            
+            ValueFromBlock intToDouble = m_out.anchor(
+                m_out.intToDouble(unboxInt32(value)));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(doubleTesting, doubleCase);
+            LValue valueIsNumber = isNumber(value, provenType(m_node-&gt;child1()));
+            m_out.branch(valueIsNumber, usually(doubleCase), rarely(nonDoubleCase));
+
+            m_out.appendTo(doubleCase, nonDoubleCase);
+            ValueFromBlock unboxedDouble = m_out.anchor(unboxDouble(value));
+            m_out.jump(continuation);
+
+            if (shouldConvertNonNumber) {
+                LBasicBlock undefinedCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble converting undefined case&quot;));
+                LBasicBlock testNullCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing null case&quot;));
+                LBasicBlock nullCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble converting null case&quot;));
+                LBasicBlock testBooleanTrueCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing boolean true case&quot;));
+                LBasicBlock convertBooleanTrueCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble convert boolean true case&quot;));
+                LBasicBlock convertBooleanFalseCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble convert boolean false case&quot;));
+
+                m_out.appendTo(nonDoubleCase, undefinedCase);
+                LValue valueIsUndefined = m_out.equal(value, m_out.constInt64(ValueUndefined));
+                m_out.branch(valueIsUndefined, unsure(undefinedCase), unsure(testNullCase));
+
+                m_out.appendTo(undefinedCase, testNullCase);
+                ValueFromBlock convertedUndefined = m_out.anchor(m_out.constDouble(PNaN));
+                m_out.jump(continuation);
+
+                m_out.appendTo(testNullCase, nullCase);
+                LValue valueIsNull = m_out.equal(value, m_out.constInt64(ValueNull));
+                m_out.branch(valueIsNull, unsure(nullCase), unsure(testBooleanTrueCase));
+
+                m_out.appendTo(nullCase, testBooleanTrueCase);
+                ValueFromBlock convertedNull = m_out.anchor(m_out.constDouble(0));
+                m_out.jump(continuation);
+
+                m_out.appendTo(testBooleanTrueCase, convertBooleanTrueCase);
+                LValue valueIsBooleanTrue = m_out.equal(value, m_out.constInt64(ValueTrue));
+                m_out.branch(valueIsBooleanTrue, unsure(convertBooleanTrueCase), unsure(convertBooleanFalseCase));
+
+                m_out.appendTo(convertBooleanTrueCase, convertBooleanFalseCase);
+                ValueFromBlock convertedTrue = m_out.anchor(m_out.constDouble(1));
+                m_out.jump(continuation);
+
+                m_out.appendTo(convertBooleanFalseCase, continuation);
+
+                LValue valueIsNotBooleanFalse = m_out.notEqual(value, m_out.constInt64(ValueFalse));
+                FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), ~SpecCell, valueIsNotBooleanFalse);
+                ValueFromBlock convertedFalse = m_out.anchor(m_out.constDouble(0));
+                m_out.jump(continuation);
+
+                m_out.appendTo(continuation, lastNext);
+                setDouble(m_out.phi(m_out.doubleType, intToDouble, unboxedDouble, convertedUndefined, convertedNull, convertedTrue, convertedFalse));
+                return;
+            }
+            m_out.appendTo(nonDoubleCase, continuation);
+            FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), SpecBytecodeNumber, m_out.booleanTrue);
+            m_out.unreachable();
+
+            m_out.appendTo(continuation, lastNext);
+
+            setDouble(m_out.phi(m_out.doubleType, intToDouble, unboxedDouble));
+            return;
+        }
+            
+        case Int52RepUse: {
+            setDouble(strictInt52ToDouble(lowStrictInt52(m_node-&gt;child1())));
+            return;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+        }
+    }
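The `RealNumberUse` path above branches on `m_out.doubleEqual(doubleValue, doubleValue)`. That works because a double compares unequal to itself exactly when it is NaN, so `x == x` is a one-instruction "this unboxed as a real double" test before falling back to the int32 unboxing path. In plain C++:

```cpp
// A double is NaN iff it compares unequal to itself (IEEE 754), so x == x is
// the cheap fast-path test compileDoubleRep() uses for RealNumberUse.
// (Requires IEEE-conformant comparisons; -ffast-math would break this.)
inline bool isRealDouble(double x)
{
    return x == x; // false only for NaN
}
```

In the lowered code, a NaN here means the unboxing produced garbage (the value wasn't a boxed double), which is why the NaN branch retries as an int32 rather than treating NaN as a valid result.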
+
+    void compileDoubleAsInt32()
+    {
+        LValue integerValue = convertDoubleToInt32(lowDouble(m_node-&gt;child1()), shouldCheckNegativeZero(m_node-&gt;arithMode()));
+        setInt32(integerValue);
+    }
+
+    void compileValueRep()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case DoubleRepUse: {
+            LValue value = lowDouble(m_node-&gt;child1());
+            
+            if (m_interpreter.needsTypeCheck(m_node-&gt;child1(), ~SpecDoubleImpureNaN)) {
+                value = m_out.select(
+                    m_out.doubleEqual(value, value), value, m_out.constDouble(PNaN));
+            }
+            
+            setJSValue(boxDouble(value));
+            return;
+        }
+            
+        case Int52RepUse: {
+            setJSValue(strictInt52ToJSValue(lowStrictInt52(m_node-&gt;child1())));
+            return;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+        }
+    }
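The select in the `DoubleRepUse` path above implements NaN purification: any NaN bit pattern, including an "impure" one, is collapsed to one canonical NaN before boxing, relying on the fact that `x != x` holds only when `x` is NaN. A minimal standalone sketch of that check (the name `PNaN` below is a stand-in for JSC's canonical pure-NaN constant):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Sketch of the NaN-purification select in compileValueRep: any NaN,
// impure bit patterns included, becomes one canonical NaN before boxing,
// so the boxed double cannot collide with pointer encodings.
static double purifyNaN(double value)
{
    const double PNaN = std::numeric_limits<double>::quiet_NaN(); // stand-in for JSC's PNaN
    return value == value ? value : PNaN; // x != x exactly when x is NaN
}
```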
+    
+    void compileInt52Rep()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case Int32Use:
+            setStrictInt52(m_out.signExt32To64(lowInt32(m_node-&gt;child1())));
+            return;
+            
+        case MachineIntUse:
+            setStrictInt52(
+                jsValueToStrictInt52(
+                    m_node-&gt;child1(), lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation)));
+            return;
+            
+        case DoubleRepMachineIntUse:
+            setStrictInt52(
+                doubleToStrictInt52(
+                    m_node-&gt;child1(), lowDouble(m_node-&gt;child1())));
+            return;
+            
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+    }
+    
+    void compileValueToInt32()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case Int52RepUse:
+            setInt32(m_out.castToInt32(lowStrictInt52(m_node-&gt;child1())));
+            break;
+            
+        case DoubleRepUse:
+            setInt32(doubleToInt32(lowDouble(m_node-&gt;child1())));
+            break;
+            
+        case NumberUse:
+        case NotCellUse: {
+            LoweredNodeValue value = m_int32Values.get(m_node-&gt;child1().node());
+            if (isValid(value)) {
+                setInt32(value.value());
+                break;
+            }
+            
+            value = m_jsValueValues.get(m_node-&gt;child1().node());
+            if (isValid(value)) {
+                setInt32(numberOrNotCellToInt32(m_node-&gt;child1(), value.value()));
+                break;
+            }
+            
+            // We'll basically just get here for constants. But it's good to have this
+            // catch-all since we often add new representations into the mix.
+            setInt32(
+                numberOrNotCellToInt32(
+                    m_node-&gt;child1(),
+                    lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation)));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+    
+    void compileBooleanToNumber()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case BooleanUse: {
+            setInt32(m_out.zeroExt(lowBoolean(m_node-&gt;child1()), m_out.int32));
+            return;
+        }
+            
+        case UntypedUse: {
+            LValue value = lowJSValue(m_node-&gt;child1());
+            
+            if (!m_interpreter.needsTypeCheck(m_node-&gt;child1(), SpecBoolInt32 | SpecBoolean)) {
+                setInt32(m_out.bitAnd(m_out.castToInt32(value), m_out.int32One));
+                return;
+            }
+            
+            LBasicBlock booleanCase = FTL_NEW_BLOCK(m_out, (&quot;BooleanToNumber boolean case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;BooleanToNumber continuation&quot;));
+            
+            ValueFromBlock notBooleanResult = m_out.anchor(value);
+            m_out.branch(
+                isBoolean(value, provenType(m_node-&gt;child1())),
+                unsure(booleanCase), unsure(continuation));
+            
+            LBasicBlock lastNext = m_out.appendTo(booleanCase, continuation);
+            ValueFromBlock booleanResult = m_out.anchor(m_out.bitOr(
+                m_out.zeroExt(unboxBoolean(value), m_out.int64), m_tagTypeNumber));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, booleanResult, notBooleanResult));
+            return;
+        }
+            
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+            return;
+        }
+    }
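The no-type-check fast path above lowers the conversion to a single `bitAnd` with 1. That works because in JSC's 64-bit value encoding `false` boxes as 0x06 and `true` as 0x07, so a boxed boolean's low bit is exactly its 0/1 payload. A hedged sketch, with the constants mirroring that encoding:

```cpp
#include <cassert>
#include <cstdint>

// JSC's 64-bit encoding boxes false as 0x06 and true as 0x07, so the
// low bit of a boxed boolean is its numeric value; this is why the
// proven-boolean case of BooleanToNumber can be a single bitAnd.
static const uint64_t ValueFalse = 0x06;
static const uint64_t ValueTrue = 0x07;

static int32_t booleanToInt32(uint64_t boxedBoolean)
{
    return (int32_t)boxedBoolean & 1;
}
```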
+
+    void compileExtractOSREntryLocal()
+    {
+        EncodedJSValue* buffer = static_cast&lt;EncodedJSValue*&gt;(
+            m_ftlState.jitCode-&gt;ftlForOSREntry()-&gt;entryBuffer()-&gt;dataBuffer());
+        setJSValue(m_out.load64(m_out.absolute(buffer + m_node-&gt;unlinkedLocal().toLocal())));
+    }
+    
+    void compileGetStack()
+    {
+        // GetStacks arise only for captured variables and arguments. For arguments, we might
+        // have already loaded the value.
+        if (LValue value = m_loadedArgumentValues.get(m_node)) {
+            setJSValue(value);
+            return;
+        }
+        
+        StackAccessData* data = m_node-&gt;stackAccessData();
+        AbstractValue&amp; value = m_state.variables().operand(data-&gt;local);
+        
+        DFG_ASSERT(m_graph, m_node, isConcrete(data-&gt;format));
+        DFG_ASSERT(m_graph, m_node, data-&gt;format != FlushedDouble); // This just happens to not arise for GetStacks, right now. It would be trivial to support.
+        
+        if (isInt32Speculation(value.m_type))
+            setInt32(m_out.load32(payloadFor(data-&gt;machineLocal)));
+        else
+            setJSValue(m_out.load64(addressFor(data-&gt;machineLocal)));
+    }
+    
+    void compilePutStack()
+    {
+        StackAccessData* data = m_node-&gt;stackAccessData();
+        switch (data-&gt;format) {
+        case FlushedJSValue: {
+            LValue value = lowJSValue(m_node-&gt;child1());
+            m_out.store64(value, addressFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        case FlushedDouble: {
+            LValue value = lowDouble(m_node-&gt;child1());
+            m_out.storeDouble(value, addressFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        case FlushedInt32: {
+            LValue value = lowInt32(m_node-&gt;child1());
+            m_out.store32(value, payloadFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        case FlushedInt52: {
+            LValue value = lowInt52(m_node-&gt;child1());
+            m_out.store64(value, addressFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        case FlushedCell: {
+            LValue value = lowCell(m_node-&gt;child1());
+            m_out.store64(value, addressFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        case FlushedBoolean: {
+            speculateBoolean(m_node-&gt;child1());
+            m_out.store64(
+                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
+                addressFor(data-&gt;machineLocal));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad flush format&quot;);
+            break;
+        }
+    }
+    
+    void compileNoOp()
+    {
+        DFG_NODE_DO_TO_CHILDREN(m_graph, m_node, speculate);
+    }
+    
+    void compileToThis()
+    {
+        LValue value = lowJSValue(m_node-&gt;child1());
+        
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;ToThis is cell case&quot;));
+        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ToThis slow case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToThis continuation&quot;));
+        
+        m_out.branch(
+            isCell(value, provenType(m_node-&gt;child1())), usually(isCellCase), rarely(slowCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, slowCase);
+        ValueFromBlock fastResult = m_out.anchor(value);
+        m_out.branch(isType(value, FinalObjectType), usually(continuation), rarely(slowCase));
+        
+        m_out.appendTo(slowCase, continuation);
+        J_JITOperation_EJ function;
+        if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
+            function = operationToThisStrict;
+        else
+            function = operationToThis;
+        ValueFromBlock slowResult = m_out.anchor(
+            vmCall(m_out.int64, m_out.operation(function), m_callFrame, value));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+    }
+    
+    void compileValueAdd()
+    {
+        emitBinarySnippet&lt;JITAddGenerator&gt;(operationValueAdd);
+    }
+    
+    void compileStrCat()
+    {
+        LValue result;
+        if (m_node-&gt;child3()) {
+            result = vmCall(
+                m_out.int64, m_out.operation(operationStrCat3), m_callFrame,
+                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
+                lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation),
+                lowJSValue(m_node-&gt;child3(), ManualOperandSpeculation));
+        } else {
+            result = vmCall(
+                m_out.int64, m_out.operation(operationStrCat2), m_callFrame,
+                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
+                lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation));
+        }
+        setJSValue(result);
+    }
+    
+    void compileArithAddOrSub()
+    {
+        bool isSub = m_node-&gt;op() == ArithSub;
+        switch (m_node-&gt;binaryUseKind()) {
+        case Int32Use: {
+            LValue left = lowInt32(m_node-&gt;child1());
+            LValue right = lowInt32(m_node-&gt;child2());
+
+            if (!shouldCheckOverflow(m_node-&gt;arithMode())) {
+                setInt32(isSub ? m_out.sub(left, right) : m_out.add(left, right));
+                break;
+            }
+
+            CheckValue* result =
+                isSub ? m_out.speculateSub(left, right) : m_out.speculateAdd(left, right);
+            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
+            setInt32(result);
+            break;
+        }
+            
+        case Int52RepUse: {
+            if (!abstractValue(m_node-&gt;child1()).couldBeType(SpecInt52)
+                &amp;&amp; !abstractValue(m_node-&gt;child2()).couldBeType(SpecInt52)) {
+                Int52Kind kind;
+                LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
+                LValue right = lowInt52(m_node-&gt;child2(), kind);
+                setInt52(isSub ? m_out.sub(left, right) : m_out.add(left, right), kind);
+                break;
+            }
+
+            LValue left = lowInt52(m_node-&gt;child1());
+            LValue right = lowInt52(m_node-&gt;child2());
+            CheckValue* result =
+                isSub ? m_out.speculateSub(left, right) : m_out.speculateAdd(left, right);
+            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
+            setInt52(result);
+            break;
+        }
+            
+        case DoubleRepUse: {
+            LValue left = lowDouble(m_node-&gt;child1());
+            LValue right = lowDouble(m_node-&gt;child2());
+
+            setDouble(isSub ? m_out.doubleSub(left, right) : m_out.doubleAdd(left, right));
+            break;
+        }
+
+        case UntypedUse: {
+            if (!isSub) {
+                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                break;
+            }
+
+            emitBinarySnippet&lt;JITSubGenerator&gt;(operationValueSub);
+            break;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+
+    void compileArithClz32()
+    {
+        LValue operand = lowInt32(m_node-&gt;child1());
+        setInt32(m_out.ctlz32(operand));
+    }
+    
+    void compileArithMul()
+    {
+        switch (m_node-&gt;binaryUseKind()) {
+        case Int32Use: {
+            LValue left = lowInt32(m_node-&gt;child1());
+            LValue right = lowInt32(m_node-&gt;child2());
+            
+            LValue result;
+
+            if (!shouldCheckOverflow(m_node-&gt;arithMode()))
+                result = m_out.mul(left, right);
+            else {
+                CheckValue* speculation = m_out.speculateMul(left, right);
+                blessSpeculation(speculation, Overflow, noValue(), nullptr, m_origin);
+                result = speculation;
+            }
+            
+            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
+                LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArithMul slow case&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMul continuation&quot;));
+                
+                m_out.branch(
+                    m_out.notZero32(result), usually(continuation), rarely(slowCase));
+                
+                LBasicBlock lastNext = m_out.appendTo(slowCase, continuation);
+                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(left, m_out.int32Zero));
+                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(right, m_out.int32Zero));
+                m_out.jump(continuation);
+                m_out.appendTo(continuation, lastNext);
+            }
+            
+            setInt32(result);
+            break;
+        }
+            
+        case Int52RepUse: {
+            Int52Kind kind;
+            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
+            LValue right = lowInt52(m_node-&gt;child2(), opposite(kind));
+
+            CheckValue* result = m_out.speculateMul(left, right);
+            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
+
+            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
+                LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArithMul slow case&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMul continuation&quot;));
+                
+                m_out.branch(
+                    m_out.notZero64(result), usually(continuation), rarely(slowCase));
+                
+                LBasicBlock lastNext = m_out.appendTo(slowCase, continuation);
+                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(left, m_out.int64Zero));
+                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(right, m_out.int64Zero));
+                m_out.jump(continuation);
+                m_out.appendTo(continuation, lastNext);
+            }
+            
+            setInt52(result);
+            break;
+        }
+            
+        case DoubleRepUse: {
+            setDouble(
+                m_out.doubleMul(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
+            break;
+        }
+
+        case UntypedUse: {
+            emitBinarySnippet&lt;JITMulGenerator&gt;(operationValueMul);
+            break;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+
+    void compileArithDiv()
+    {
+        switch (m_node-&gt;binaryUseKind()) {
+        case Int32Use: {
+            LValue numerator = lowInt32(m_node-&gt;child1());
+            LValue denominator = lowInt32(m_node-&gt;child2());
+
+            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
+                LBasicBlock zeroNumerator = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv zero numerator&quot;));
+                LBasicBlock numeratorContinuation = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv numerator continuation&quot;));
+
+                m_out.branch(
+                    m_out.isZero32(numerator),
+                    rarely(zeroNumerator), usually(numeratorContinuation));
+
+                LBasicBlock innerLastNext = m_out.appendTo(zeroNumerator, numeratorContinuation);
+
+                speculate(
+                    NegativeZero, noValue(), 0, m_out.lessThan(denominator, m_out.int32Zero));
+
+                m_out.jump(numeratorContinuation);
+
+                m_out.appendTo(numeratorContinuation, innerLastNext);
+            }
+            
+            if (shouldCheckOverflow(m_node-&gt;arithMode())) {
+                LBasicBlock unsafeDenominator = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv unsafe denominator&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv continuation&quot;));
+
+                LValue adjustedDenominator = m_out.add(denominator, m_out.int32One);
+                m_out.branch(
+                    m_out.above(adjustedDenominator, m_out.int32One),
+                    usually(continuation), rarely(unsafeDenominator));
+
+                LBasicBlock lastNext = m_out.appendTo(unsafeDenominator, continuation);
+                LValue neg2ToThe31 = m_out.constInt32(-2147483647-1);
+                speculate(Overflow, noValue(), nullptr, m_out.isZero32(denominator));
+                speculate(Overflow, noValue(), nullptr, m_out.equal(numerator, neg2ToThe31));
+                m_out.jump(continuation);
+
+                m_out.appendTo(continuation, lastNext);
+                LValue result = m_out.div(numerator, denominator);
+                speculate(
+                    Overflow, noValue(), 0,
+                    m_out.notEqual(m_out.mul(result, denominator), numerator));
+                setInt32(result);
+            } else
+                setInt32(m_out.chillDiv(numerator, denominator));
+
+            break;
+        }
+            
+        case DoubleRepUse: {
+            setDouble(m_out.doubleDiv(
+                lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
+            break;
+        }
+
+        case UntypedUse: {
+            emitBinarySnippet&lt;JITDivGenerator, NeedScratchFPR&gt;(operationValueDiv);
+            break;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
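The `add(denominator, 1)` followed by an unsigned `above(..., 1)` in the overflow-checked path folds both unsafe denominators into one comparison: with wraparound, `denominator + 1` is at most 1 (unsigned) exactly when the denominator is 0 (division by zero) or -1 (INT_MIN / -1 overflows). A small sketch of that predicate:

```cpp
#include <cassert>
#include <cstdint>

// The unsigned-compare trick from compileArithDiv/compileArithMod:
// (uint32_t)(denominator + 1) <= 1 iff denominator is 0 or -1, the two
// values for which 32-bit signed division can trap or overflow.
// The addition is done in uint32_t so the wraparound is well defined.
static bool denominatorIsUnsafe(int32_t denominator)
{
    return (uint32_t)denominator + 1u <= 1u;
}
```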
+    
+    void compileArithMod()
+    {
+        switch (m_node-&gt;binaryUseKind()) {
+        case Int32Use: {
+            LValue numerator = lowInt32(m_node-&gt;child1());
+            LValue denominator = lowInt32(m_node-&gt;child2());
+
+            LValue remainder;
+            if (shouldCheckOverflow(m_node-&gt;arithMode())) {
+                LBasicBlock unsafeDenominator = FTL_NEW_BLOCK(m_out, (&quot;ArithMod unsafe denominator&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMod continuation&quot;));
+
+                LValue adjustedDenominator = m_out.add(denominator, m_out.int32One);
+                m_out.branch(
+                    m_out.above(adjustedDenominator, m_out.int32One),
+                    usually(continuation), rarely(unsafeDenominator));
+
+                LBasicBlock lastNext = m_out.appendTo(unsafeDenominator, continuation);
+                LValue neg2ToThe31 = m_out.constInt32(-2147483647-1);
+                speculate(Overflow, noValue(), nullptr, m_out.isZero32(denominator));
+                speculate(Overflow, noValue(), nullptr, m_out.equal(numerator, neg2ToThe31));
+                m_out.jump(continuation);
+
+                m_out.appendTo(continuation, lastNext);
+                LValue result = m_out.mod(numerator, denominator);
+                remainder = result;
+            } else
+                remainder = m_out.chillMod(numerator, denominator);
+
+            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
+                LBasicBlock negativeNumerator = FTL_NEW_BLOCK(m_out, (&quot;ArithMod negative numerator&quot;));
+                LBasicBlock numeratorContinuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMod numerator continuation&quot;));
+
+                m_out.branch(
+                    m_out.lessThan(numerator, m_out.int32Zero),
+                    unsure(negativeNumerator), unsure(numeratorContinuation));
+
+                LBasicBlock innerLastNext = m_out.appendTo(negativeNumerator, numeratorContinuation);
+
+                speculate(NegativeZero, noValue(), 0, m_out.isZero32(remainder));
+
+                m_out.jump(numeratorContinuation);
+
+                m_out.appendTo(numeratorContinuation, innerLastNext);
+            }
+
+            setInt32(remainder);
+            break;
+        }
+            
+        case DoubleRepUse: {
+            setDouble(
+                m_out.doubleMod(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+
+    void compileArithMinOrMax()
+    {
+        switch (m_node-&gt;binaryUseKind()) {
+        case Int32Use: {
+            LValue left = lowInt32(m_node-&gt;child1());
+            LValue right = lowInt32(m_node-&gt;child2());
+            
+            setInt32(
+                m_out.select(
+                    m_node-&gt;op() == ArithMin
+                        ? m_out.lessThan(left, right)
+                        : m_out.lessThan(right, left),
+                    left, right));
+            break;
+        }
+            
+        case DoubleRepUse: {
+            LValue left = lowDouble(m_node-&gt;child1());
+            LValue right = lowDouble(m_node-&gt;child2());
+            
+            LBasicBlock notLessThan = FTL_NEW_BLOCK(m_out, (&quot;ArithMin/ArithMax not less than&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMin/ArithMax continuation&quot;));
+            
+            Vector&lt;ValueFromBlock, 2&gt; results;
+            
+            results.append(m_out.anchor(left));
+            m_out.branch(
+                m_node-&gt;op() == ArithMin
+                    ? m_out.doubleLessThan(left, right)
+                    : m_out.doubleGreaterThan(left, right),
+                unsure(continuation), unsure(notLessThan));
+            
+            LBasicBlock lastNext = m_out.appendTo(notLessThan, continuation);
+            results.append(m_out.anchor(m_out.select(
+                m_node-&gt;op() == ArithMin
+                    ? m_out.doubleGreaterThanOrEqual(left, right)
+                    : m_out.doubleLessThanOrEqual(left, right),
+                right, m_out.constDouble(PNaN))));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setDouble(m_out.phi(m_out.doubleType, results));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
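The `DoubleRepUse` path cannot be a single select because JavaScript's `Math.min`/`Math.max` must return NaN whenever either operand is unordered, and a plain less-than select would leak an operand through. A sketch mirroring the branch structure above, shown for `ArithMin`:

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Mirrors the double path of compileArithMinOrMax (ArithMin case):
// take left when strictly smaller; otherwise take right only if the
// operands are ordered; any NaN operand yields NaN.
static double jsMin(double left, double right)
{
    if (left < right)
        return left; // fast path: left wins
    if (left >= right)
        return right; // ordered, so right is the minimum
    return std::numeric_limits<double>::quiet_NaN(); // unordered: a NaN operand
}
```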
+    
+    void compileArithAbs()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case Int32Use: {
+            LValue value = lowInt32(m_node-&gt;child1());
+
+            LValue mask = m_out.aShr(value, m_out.constInt32(31));
+            LValue result = m_out.bitXor(mask, m_out.add(mask, value));
+
+            if (shouldCheckOverflow(m_node-&gt;arithMode()))
+                speculate(Overflow, noValue(), 0, m_out.equal(result, m_out.constInt32(1 &lt;&lt; 31)));
+
+            setInt32(result);
+            break;
+        }
+            
+        case DoubleRepUse: {
+            setDouble(m_out.doubleAbs(lowDouble(m_node-&gt;child1())));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
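The `Int32Use` path computes |x| branchlessly: `mask = x >> 31` (arithmetic shift) is 0 for non-negative x and -1 for negative x, so `(x + mask) ^ mask` leaves non-negative values alone and negates negative ones. The single unrepresentable input is INT_MIN, which the `Overflow` speculation catches by comparing the result against `1 << 31`. A standalone sketch (arithmetic routed through `uint32_t` to keep the wraparound well defined in C++):

```cpp
#include <cassert>
#include <cstdint>

// Branchless abs from compileArithAbs: mask is 0 for x >= 0 and -1 for
// x < 0, so (x + mask) ^ mask is x when non-negative and ~(x - 1) == -x
// when negative. INT32_MIN maps to itself, the case the compiled code
// speculates against.
static int32_t branchlessAbs(int32_t x)
{
    int32_t mask = x >> 31; // arithmetic shift: sign-fill
    return (int32_t)(((uint32_t)x + (uint32_t)mask) ^ (uint32_t)mask);
}
```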
+
+    void compileArithSin() { setDouble(m_out.doubleSin(lowDouble(m_node-&gt;child1()))); }
+
+    void compileArithCos() { setDouble(m_out.doubleCos(lowDouble(m_node-&gt;child1()))); }
+
+    void compileArithPow()
+    {
+        if (m_node-&gt;child2().useKind() == Int32Use)
+            setDouble(m_out.doublePowi(lowDouble(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+        else {
+            LValue base = lowDouble(m_node-&gt;child1());
+            LValue exponent = lowDouble(m_node-&gt;child2());
+
+            LBasicBlock integerExponentIsSmallBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow test integer exponent is small.&quot;));
+            LBasicBlock integerExponentPowBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow pow(double, (int)double).&quot;));
+            LBasicBlock doubleExponentPowBlockEntry = FTL_NEW_BLOCK(m_out, (&quot;ArithPow pow(double, double).&quot;));
+            LBasicBlock nanExceptionExponentIsInfinity = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, check exponent is infinity.&quot;));
+            LBasicBlock nanExceptionBaseIsOne = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, check base is one.&quot;));
+            LBasicBlock powBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow regular pow&quot;));
+            LBasicBlock nanExceptionResultIsNaN = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, result is NaN.&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithPow continuation&quot;));
+
+            LValue integerExponent = m_out.doubleToInt(exponent);
+            LValue integerExponentConvertedToDouble = m_out.intToDouble(integerExponent);
+            LValue exponentIsInteger = m_out.doubleEqual(exponent, integerExponentConvertedToDouble);
+            m_out.branch(exponentIsInteger, unsure(integerExponentIsSmallBlock), unsure(doubleExponentPowBlockEntry));
+
+            LBasicBlock lastNext = m_out.appendTo(integerExponentIsSmallBlock, integerExponentPowBlock);
+            LValue integerExponentBelow1000 = m_out.below(integerExponent, m_out.constInt32(1000));
+            m_out.branch(integerExponentBelow1000, usually(integerExponentPowBlock), rarely(doubleExponentPowBlockEntry));
+
+            m_out.appendTo(integerExponentPowBlock, doubleExponentPowBlockEntry);
+            ValueFromBlock powDoubleIntResult = m_out.anchor(m_out.doublePowi(base, integerExponent));
+            m_out.jump(continuation);
+
+            // If y is NaN, the result is NaN.
+            m_out.appendTo(doubleExponentPowBlockEntry, nanExceptionExponentIsInfinity);
+            LValue exponentIsNaN;
+            if (provenType(m_node-&gt;child2()) &amp; SpecDoubleNaN)
+                exponentIsNaN = m_out.doubleNotEqualOrUnordered(exponent, exponent);
+            else
+                exponentIsNaN = m_out.booleanFalse;
+            m_out.branch(exponentIsNaN, rarely(nanExceptionResultIsNaN), usually(nanExceptionExponentIsInfinity));
+
+            // If abs(x) is 1 and y is +infinity, the result is NaN.
+            // If abs(x) is 1 and y is -infinity, the result is NaN.
+            m_out.appendTo(nanExceptionExponentIsInfinity, nanExceptionBaseIsOne);
+            LValue absoluteExponent = m_out.doubleAbs(exponent);
+            LValue absoluteExponentIsInfinity = m_out.doubleEqual(absoluteExponent, m_out.constDouble(std::numeric_limits&lt;double&gt;::infinity()));
+            m_out.branch(absoluteExponentIsInfinity, rarely(nanExceptionBaseIsOne), usually(powBlock));
+
+            m_out.appendTo(nanExceptionBaseIsOne, powBlock);
+            LValue absoluteBase = m_out.doubleAbs(base);
+            LValue absoluteBaseIsOne = m_out.doubleEqual(absoluteBase, m_out.constDouble(1));
+            m_out.branch(absoluteBaseIsOne, unsure(nanExceptionResultIsNaN), unsure(powBlock));
+
+            m_out.appendTo(powBlock, nanExceptionResultIsNaN);
+            ValueFromBlock powResult = m_out.anchor(m_out.doublePow(base, exponent));
+            m_out.jump(continuation);
+
+            m_out.appendTo(nanExceptionResultIsNaN, continuation);
+            ValueFromBlock pureNan = m_out.anchor(m_out.constDouble(PNaN));
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            setDouble(m_out.phi(m_out.doubleType, powDoubleIntResult, powResult, pureNan));
+        }
+    }
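The extra "NaN Exception" blocks encode the ECMAScript cases where `Math.pow` must return NaN even though C's `pow` may return 1: a NaN exponent, and |base| == 1 with an infinite exponent. A hedged sketch of that dispatch (the fallthrough simply defers to the host `pow`, standing in for the "regular pow" block):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// The ECMAScript Math.pow special cases that compileArithPow branches
// on: pow(anything, NaN) and pow(+/-1, +/-Infinity) must be NaN in JS,
// even where C's pow would return 1.
static double jsPow(double base, double exponent)
{
    if (std::isnan(exponent))
        return std::numeric_limits<double>::quiet_NaN();
    if (std::fabs(base) == 1 && std::isinf(exponent))
        return std::numeric_limits<double>::quiet_NaN();
    return std::pow(base, exponent);
}
```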
+
+    void compileArithRandom()
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+
+        // Inlined WeakRandom::advance().
+        // uint64_t x = m_low;
+        void* lowAddress = reinterpret_cast&lt;uint8_t*&gt;(globalObject) + JSGlobalObject::weakRandomOffset() + WeakRandom::lowOffset();
+        LValue low = m_out.load64(m_out.absolute(lowAddress));
+        // uint64_t y = m_high;
+        void* highAddress = reinterpret_cast&lt;uint8_t*&gt;(globalObject) + JSGlobalObject::weakRandomOffset() + WeakRandom::highOffset();
+        LValue high = m_out.load64(m_out.absolute(highAddress));
+        // m_low = y;
+        m_out.store64(high, m_out.absolute(lowAddress));
+
+        // x ^= x &lt;&lt; 23;
+        LValue phase1 = m_out.bitXor(m_out.shl(low, m_out.constInt64(23)), low);
+
+        // x ^= x &gt;&gt; 17;
+        LValue phase2 = m_out.bitXor(m_out.lShr(phase1, m_out.constInt64(17)), phase1);
+
+        // x ^= y ^ (y &gt;&gt; 26);
+        LValue phase3 = m_out.bitXor(m_out.bitXor(high, m_out.lShr(high, m_out.constInt64(26))), phase2);
+
+        // m_high = x;
+        m_out.store64(phase3, m_out.absolute(highAddress));
+
+        // return x + y;
+        LValue random64 = m_out.add(phase3, high);
+
+        // Extract the low 53 bits. Integers up to 2^53 are exactly representable in a double.
+        LValue random53 = m_out.bitAnd(random64, m_out.constInt64((1ULL &lt;&lt; 53) - 1));
+
+        LValue double53Integer = m_out.intToDouble(random53);
+
+        // Convert `(53-bit integer value) / (1 &lt;&lt; 53)` into `(53-bit integer value) * (1.0 / (1 &lt;&lt; 53))`.
+        // The constant `1.0 / (1 &lt;&lt; 53)` is exactly 2^-53 (mantissa = 0, biased exponent = 970).
+        static const double scale = 1.0 / (1ULL &lt;&lt; 53);
+
+        // Multiplying by 2^-53 leaves the mantissa of the 53-bit integer-valued double unchanged;
+        // it only reduces the exponent part.
+        // (Except for 0.0, which is handled specially: its exponent simply stays 0.)
+        // The result is a random double with 53 bits of precision in [0, 1).
+        LValue result = m_out.doubleMul(double53Integer, m_out.constDouble(scale));
+
+        setDouble(result);
+    }
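The inlined `WeakRandom::advance()` is an xorshift128+ generator; the commented phases above map one-to-one onto the shifts and xors. A standalone sketch, including the 53-bit masking and scaling that turn the 64-bit result into a double in [0, 1) (the struct name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Standalone version of the xorshift128+ sequence that
// compileArithRandom inlines, with the same 53-bit mask-and-scale step.
struct WeakRandomSketch {
    uint64_t low;
    uint64_t high;

    double advance()
    {
        uint64_t x = low;
        uint64_t y = high;
        low = y;            // m_low = y
        x ^= x << 23;       // phase 1
        x ^= x >> 17;       // phase 2
        x ^= y ^ (y >> 26); // phase 3
        high = x;           // m_high = x
        uint64_t random53 = (x + y) & ((1ULL << 53) - 1);
        static const double scale = 1.0 / (1ULL << 53); // exactly 2^-53
        return random53 * scale; // 53-bit-precision double in [0, 1)
    }
};
```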
+
+    void compileArithRound()
+    {
+        LBasicBlock realPartIsMoreThanHalf = FTL_NEW_BLOCK(m_out, (&quot;ArithRound should round down&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithRound continuation&quot;));
+
+        LValue value = lowDouble(m_node-&gt;child1());
+        LValue integerValue = m_out.doubleCeil(value);
+        ValueFromBlock integerValueResult = m_out.anchor(integerValue);
+
+        LValue realPart = m_out.doubleSub(integerValue, value);
+
+        m_out.branch(m_out.doubleGreaterThanOrUnordered(realPart, m_out.constDouble(0.5)), unsure(realPartIsMoreThanHalf), unsure(continuation));
+
+        LBasicBlock lastNext = m_out.appendTo(realPartIsMoreThanHalf, continuation);
+        LValue integerValueRoundedDown = m_out.doubleSub(integerValue, m_out.constDouble(1));
+        ValueFromBlock integerValueRoundedDownResult = m_out.anchor(integerValueRoundedDown);
+        m_out.jump(continuation);
+        m_out.appendTo(continuation, lastNext);
+
+        LValue result = m_out.phi(m_out.doubleType, integerValueResult, integerValueRoundedDownResult);
+
+        if (producesInteger(m_node-&gt;arithRoundingMode())) {
+            LValue integerValue = convertDoubleToInt32(result, shouldCheckNegativeZero(m_node-&gt;arithRoundingMode()));
+            setInt32(integerValue);
+        } else
+            setDouble(result);
+    }
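The rounding above is ceil-then-adjust: start from `ceil(x)` and step down by 1 only when the gap `ceil(x) - x` exceeds 0.5. This implements JavaScript's round-half-toward-positive-infinity semantics (`Math.round(2.5)` is 3 but `Math.round(-2.5)` is -2). A sketch of the double-producing part, before any int32 conversion:

```cpp
#include <cassert>
#include <cmath>

// The ceil-then-adjust scheme from compileArithRound: JS Math.round
// sends halves toward +infinity, so take ceil(x) and subtract 1 only
// when the gap to x strictly exceeds 0.5. NaN falls through as NaN.
static double jsRound(double value)
{
    double up = std::ceil(value);
    if (up - value > 0.5)
        return up - 1;
    return up;
}
```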
+
+    void compileArithSqrt() { setDouble(m_out.doubleSqrt(lowDouble(m_node-&gt;child1()))); }
+
+    void compileArithLog() { setDouble(m_out.doubleLog(lowDouble(m_node-&gt;child1()))); }
+    
+    void compileArithFRound()
+    {
+        setDouble(m_out.fround(lowDouble(m_node-&gt;child1())));
+    }
+    
+    void compileArithNegate()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case Int32Use: {
+            LValue value = lowInt32(m_node-&gt;child1());
+            
+            LValue result;
+            if (!shouldCheckOverflow(m_node-&gt;arithMode()))
+                result = m_out.neg(value);
+            else if (!shouldCheckNegativeZero(m_node-&gt;arithMode())) {
+                CheckValue* check = m_out.speculateSub(m_out.int32Zero, value);
+                blessSpeculation(check, Overflow, noValue(), nullptr, m_origin);
+                result = check;
+            } else {
+                speculate(Overflow, noValue(), 0, m_out.testIsZero32(value, m_out.constInt32(0x7fffffff)));
+                result = m_out.neg(value);
+            }
+
+            setInt32(result);
+            break;
+        }
+            
+        case Int52RepUse: {
+            if (!abstractValue(m_node-&gt;child1()).couldBeType(SpecInt52)) {
+                Int52Kind kind;
+                LValue value = lowWhicheverInt52(m_node-&gt;child1(), kind);
+                LValue result = m_out.neg(value);
+                if (shouldCheckNegativeZero(m_node-&gt;arithMode()))
+                    speculate(NegativeZero, noValue(), 0, m_out.isZero64(result));
+                setInt52(result, kind);
+                break;
+            }
+            
+            LValue value = lowInt52(m_node-&gt;child1());
+            CheckValue* result = m_out.speculateSub(m_out.int64Zero, value);
+            blessSpeculation(result, Int52Overflow, noValue(), nullptr, m_origin);
+            speculate(NegativeZero, noValue(), 0, m_out.isZero64(result));
+            setInt52(result);
+            break;
+        }
+            
+        case DoubleRepUse: {
+            setDouble(m_out.doubleNeg(lowDouble(m_node-&gt;child1())));
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
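The Int32 path that checks both overflow and negative zero relies on a single mask test: `(value & 0x7fffffff) == 0` holds for exactly two inputs, 0 (whose negation is -0) and INT32_MIN (whose negation overflows). A sketch of that predicate (the name is illustrative):

```cpp
#include <cstdint>

// Sketch: clearing the sign bit leaves zero only for 0 and INT32_MIN, the
// two int32 values whose negation needs a bailout (negative zero / overflow).
bool negateNeedsBailout(int32_t value)
{
    return (value & 0x7fffffff) == 0;
}
```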
+    
+    void compileBitAnd()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitBinaryBitOpSnippet&lt;JITBitAndGenerator&gt;(operationValueBitAnd);
+            return;
+        }
+        setInt32(m_out.bitAnd(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+    }
+    
+    void compileBitOr()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitBinaryBitOpSnippet&lt;JITBitOrGenerator&gt;(operationValueBitOr);
+            return;
+        }
+        setInt32(m_out.bitOr(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+    }
+    
+    void compileBitXor()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitBinaryBitOpSnippet&lt;JITBitXorGenerator&gt;(operationValueBitXor);
+            return;
+        }
+        setInt32(m_out.bitXor(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+    }
+    
+    void compileBitRShift()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitRightShiftSnippet(JITRightShiftGenerator::SignedShift);
+            return;
+        }
+        setInt32(m_out.aShr(
+            lowInt32(m_node-&gt;child1()),
+            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
+    }
+    
+    void compileBitLShift()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitBinaryBitOpSnippet&lt;JITLeftShiftGenerator&gt;(operationValueBitLShift);
+            return;
+        }
+        setInt32(m_out.shl(
+            lowInt32(m_node-&gt;child1()),
+            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
+    }
+    
+    void compileBitURShift()
+    {
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            emitRightShiftSnippet(JITRightShiftGenerator::UnsignedShift);
+            return;
+        }
+        setInt32(m_out.lShr(
+            lowInt32(m_node-&gt;child1()),
+            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
+    }
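All three shift lowerings mask the shift count with `& 31`, matching ECMAScript's rule that the shift operators use only the low five bits of the right operand. A left-shift sketch (using an unsigned intermediate to sidestep C++ signed-shift pitfalls; names are illustrative):

```cpp
#include <cstdint>

// Sketch: JS `lhs << rhs` shifts by (rhs & 31), so a shift count of 33
// behaves like a shift by 1, and a count of 32 leaves the value unchanged.
int32_t jsLeftShift(int32_t lhs, int32_t rhs)
{
    return static_cast<int32_t>(static_cast<uint32_t>(lhs) << (rhs & 31));
}
```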
+    
+    void compileUInt32ToNumber()
+    {
+        LValue value = lowInt32(m_node-&gt;child1());
+
+        if (doesOverflow(m_node-&gt;arithMode())) {
+            setDouble(m_out.unsignedToDouble(value));
+            return;
+        }
+        
+        speculate(Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero));
+        setInt32(value);
+    }
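The non-overflowing path works because a uint32 reinterpreted as int32 is negative exactly when it exceeds INT32_MAX, so speculating `value >= 0` on the int32 bit pattern is the same as speculating that the unsigned value fits in int32. A sketch of that equivalence (the name is illustrative):

```cpp
#include <cstdint>

// Sketch: values up to 0x7fffffff keep a clear sign bit when reinterpreted
// as int32; anything larger turns negative and must take the double path.
bool fitsInInt32(uint32_t value)
{
    return static_cast<int32_t>(value) >= 0;
}
```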
+    
+    void compileCheckStructure()
+    {
+        ExitKind exitKind;
+        if (m_node-&gt;child1()-&gt;hasConstant())
+            exitKind = BadConstantCache;
+        else
+            exitKind = BadCache;
+
+        switch (m_node-&gt;child1().useKind()) {
+        case CellUse:
+        case KnownCellUse: {
+            LValue cell = lowCell(m_node-&gt;child1());
+            
+            checkStructure(
+                m_out.load32(cell, m_heaps.JSCell_structureID), jsValueValue(cell),
+                exitKind, m_node-&gt;structureSet(),
+                [&amp;] (Structure* structure) {
+                    return weakStructureID(structure);
+                });
+            return;
+        }
+
+        case CellOrOtherUse: {
+            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
+
+            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse cell case&quot;));
+            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse not cell case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse continuation&quot;));
+
+            m_out.branch(
+                isCell(value, provenType(m_node-&gt;child1())), unsure(cellCase), unsure(notCellCase));
+
+            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
+            checkStructure(
+                m_out.load32(value, m_heaps.JSCell_structureID), jsValueValue(value),
+                exitKind, m_node-&gt;structureSet(),
+                [&amp;] (Structure* structure) {
+                    return weakStructureID(structure);
+                });
+            m_out.jump(continuation);
+
+            m_out.appendTo(notCellCase, continuation);
+            FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), SpecCell | SpecOther, isNotOther(value));
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            return;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            return;
+        }
+    }
+    
+    void compileCheckCell()
+    {
+        LValue cell = lowCell(m_node-&gt;child1());
+        
+        speculate(
+            BadCell, jsValueValue(cell), m_node-&gt;child1().node(),
+            m_out.notEqual(cell, weakPointer(m_node-&gt;cellOperand()-&gt;cell())));
+    }
+    
+    void compileCheckBadCell()
+    {
+        terminate(BadCell);
+    }
+
+    void compileCheckNotEmpty()
+    {
+        speculate(TDZFailure, noValue(), nullptr, m_out.isZero64(lowJSValue(m_node-&gt;child1())));
+    }
+
+    void compileCheckIdent()
+    {
+        UniquedStringImpl* uid = m_node-&gt;uidOperand();
+        if (uid-&gt;isSymbol()) {
+            LValue symbol = lowSymbol(m_node-&gt;child1());
+            LValue stringImpl = m_out.loadPtr(symbol, m_heaps.Symbol_privateName);
+            speculate(BadIdent, noValue(), nullptr, m_out.notEqual(stringImpl, m_out.constIntPtr(uid)));
+        } else {
+            LValue string = lowStringIdent(m_node-&gt;child1());
+            LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
+            speculate(BadIdent, noValue(), nullptr, m_out.notEqual(stringImpl, m_out.constIntPtr(uid)));
+        }
+    }
+
+    void compileGetExecutable()
+    {
+        LValue cell = lowCell(m_node-&gt;child1());
+        speculateFunction(m_node-&gt;child1(), cell);
+        setJSValue(m_out.loadPtr(cell, m_heaps.JSFunction_executable));
+    }
+    
+    void compileArrayifyToStructure()
+    {
+        LValue cell = lowCell(m_node-&gt;child1());
+        LValue property = !!m_node-&gt;child2() ? lowInt32(m_node-&gt;child2()) : 0;
+        
+        LBasicBlock unexpectedStructure = FTL_NEW_BLOCK(m_out, (&quot;ArrayifyToStructure unexpected structure&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayifyToStructure continuation&quot;));
+        
+        LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
+        
+        m_out.branch(
+            m_out.notEqual(structureID, weakStructureID(m_node-&gt;structure())),
+            rarely(unexpectedStructure), usually(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(unexpectedStructure, continuation);
+        
+        if (property) {
+            switch (m_node-&gt;arrayMode().type()) {
+            case Array::Int32:
+            case Array::Double:
+            case Array::Contiguous:
+                speculate(
+                    Uncountable, noValue(), 0,
+                    m_out.aboveOrEqual(property, m_out.constInt32(MIN_SPARSE_ARRAY_INDEX)));
+                break;
+            default:
+                break;
+            }
+        }
+        
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+            vmCall(m_out.voidType, m_out.operation(operationEnsureInt32), m_callFrame, cell);
+            break;
+        case Array::Double:
+            vmCall(m_out.voidType, m_out.operation(operationEnsureDouble), m_callFrame, cell);
+            break;
+        case Array::Contiguous:
+            vmCall(m_out.voidType, m_out.operation(operationEnsureContiguous), m_callFrame, cell);
+            break;
+        case Array::ArrayStorage:
+        case Array::SlowPutArrayStorage:
+            vmCall(m_out.voidType, m_out.operation(operationEnsureArrayStorage), m_callFrame, cell);
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            break;
+        }
+        
+        structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
+        speculate(
+            BadIndexingType, jsValueValue(cell), 0,
+            m_out.notEqual(structureID, weakStructureID(m_node-&gt;structure())));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void compilePutStructure()
+    {
+        m_ftlState.jitCode-&gt;common.notifyCompilingStructureTransition(m_graph.m_plan, codeBlock(), m_node);
+
+        Structure* oldStructure = m_node-&gt;transition()-&gt;previous;
+        Structure* newStructure = m_node-&gt;transition()-&gt;next;
+        ASSERT_UNUSED(oldStructure, oldStructure-&gt;indexingType() == newStructure-&gt;indexingType());
+        ASSERT(oldStructure-&gt;typeInfo().inlineTypeFlags() == newStructure-&gt;typeInfo().inlineTypeFlags());
+        ASSERT(oldStructure-&gt;typeInfo().type() == newStructure-&gt;typeInfo().type());
+
+        LValue cell = lowCell(m_node-&gt;child1());
+        m_out.store32(
+            weakStructureID(newStructure),
+            cell, m_heaps.JSCell_structureID);
+    }
+    
+    void compileGetById()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case CellUse: {
+            setJSValue(getById(lowCell(m_node-&gt;child1())));
+            return;
+        }
+            
+        case UntypedUse: {
+            // This is pretty weird, since we duplicate the slow path both here and in the
+            // code generated by the IC. We should investigate making this less bad.
+            // https://bugs.webkit.org/show_bug.cgi?id=127830
+            LValue value = lowJSValue(m_node-&gt;child1());
+            
+            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped cell case&quot;));
+            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped not cell case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped continuation&quot;));
+            
+            m_out.branch(
+                isCell(value, provenType(m_node-&gt;child1())), unsure(cellCase), unsure(notCellCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
+            ValueFromBlock cellResult = m_out.anchor(getById(value));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notCellCase, continuation);
+            ValueFromBlock notCellResult = m_out.anchor(vmCall(
+                m_out.int64, m_out.operation(operationGetByIdGeneric),
+                m_callFrame, value,
+                m_out.constIntPtr(m_graph.identifiers()[m_node-&gt;identifierNumber()])));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, cellResult, notCellResult));
+            return;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            return;
+        }
+    }
+    
+    void compilePutById()
+    {
+        Node* node = m_node;
+        
+        // See above; CellUse is easier so we do only that for now.
+        ASSERT(node-&gt;child1().useKind() == CellUse);
+
+        LValue base = lowCell(node-&gt;child1());
+        LValue value = lowJSValue(node-&gt;child2());
+        auto uid = m_graph.identifiers()[node-&gt;identifierNumber()];
+
+        B3::PatchpointValue* patchpoint = m_out.patchpoint(Void);
+        patchpoint-&gt;appendSomeRegister(base);
+        patchpoint-&gt;appendSomeRegister(value);
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+
+        // FIXME: If this is a PutByIdFlush, we might want to late-clobber volatile registers.
+        // https://bugs.webkit.org/show_bug.cgi?id=152848
+
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+
+        State* state = &amp;m_ftlState;
+        ECMAMode ecmaMode = m_graph.executableFor(node-&gt;origin.semantic)-&gt;ecmaMode();
+        
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                CallSiteIndex callSiteIndex =
+                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(node-&gt;origin.semantic);
+
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+
+                // JS setter call ICs generated by the PutById IC will need this.
+                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                auto generator = Box&lt;JITPutByIdGenerator&gt;::create(
+                    jit.codeBlock(), node-&gt;origin.semantic, callSiteIndex,
+                    params.unavailableRegisters(), JSValueRegs(params[0].gpr()),
+                    JSValueRegs(params[1].gpr()), GPRInfo::patchpointScratchRegister, ecmaMode,
+                    node-&gt;op() == PutByIdDirect ? Direct : NotDirect);
+
+                generator-&gt;generateFastPath(jit);
+                CCallHelpers::Label done = jit.label();
+
+                params.addLatePath(
+                    [=] (CCallHelpers&amp; jit) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                        generator-&gt;slowPathJump().link(&amp;jit);
+                        CCallHelpers::Label slowPathBegin = jit.label();
+                        CCallHelpers::Call slowPathCall = callOperation(
+                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                            exceptions.get(), generator-&gt;slowPathFunction(), InvalidGPRReg,
+                            CCallHelpers::TrustedImmPtr(generator-&gt;stubInfo()), params[1].gpr(),
+                            params[0].gpr(), CCallHelpers::TrustedImmPtr(uid)).call();
+                        jit.jump().linkTo(done, &amp;jit);
+
+                        generator-&gt;reportSlowPathCall(slowPathBegin, slowPathCall);
+
+                        jit.addLinkTask(
+                            [=] (LinkBuffer&amp; linkBuffer) {
+                                generator-&gt;finalize(linkBuffer);
+                            });
+                    });
+            });
+    }
+    
+    void compileGetButterfly()
+    {
+        setStorage(loadButterflyWithBarrier(lowCell(m_node-&gt;child1())));
+    }
+
+    void compileGetButterflyReadOnly()
+    {
+        setStorage(loadButterflyReadOnly(lowCell(m_node-&gt;child1())));
+    }
+    
+    void compileConstantStoragePointer()
+    {
+        setStorage(m_out.constIntPtr(m_node-&gt;storagePointer()));
+    }
+    
+    void compileGetIndexedPropertyStorage()
+    {
+        LValue cell = lowCell(m_node-&gt;child1());
+        
+        if (m_node-&gt;arrayMode().type() == Array::String) {
+            LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;GetIndexedPropertyStorage String slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetIndexedPropertyStorage String continuation&quot;));
+
+            LValue fastResultValue = m_out.loadPtr(cell, m_heaps.JSString_value);
+            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+            
+            m_out.branch(
+                m_out.notNull(fastResultValue), usually(continuation), rarely(slowPath));
+            
+            LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
+            
+            ValueFromBlock slowResult = m_out.anchor(
+                vmCall(m_out.intPtr, m_out.operation(operationResolveRope), m_callFrame, cell));
+            
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            
+            setStorage(m_out.loadPtr(m_out.phi(m_out.intPtr, fastResult, slowResult), m_heaps.StringImpl_data));
+            return;
+        }
+        
+        setStorage(loadVectorWithBarrier(cell));
+    }
+    
+    void compileCheckArray()
+    {
+        Edge edge = m_node-&gt;child1();
+        LValue cell = lowCell(edge);
+        
+        if (m_node-&gt;arrayMode().alreadyChecked(m_graph, m_node, abstractValue(edge)))
+            return;
+        
+        speculate(
+            BadIndexingType, jsValueValue(cell), 0,
+            m_out.logicalNot(isArrayType(cell, m_node-&gt;arrayMode())));
+    }
+
+    void compileGetTypedArrayByteOffset()
+    {
+        LValue basePtr = lowCell(m_node-&gt;child1());
+
+        LBasicBlock simpleCase = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset wasteless typed array&quot;));
+        LBasicBlock wastefulCase = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset wasteful typed array&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset continuation&quot;));
+        
+        LValue mode = m_out.load32(basePtr, m_heaps.JSArrayBufferView_mode);
+        m_out.branch(
+            m_out.notEqual(mode, m_out.constInt32(WastefulTypedArray)),
+            unsure(simpleCase), unsure(wastefulCase));
+
+        LBasicBlock lastNext = m_out.appendTo(simpleCase, wastefulCase);
+
+        ValueFromBlock simpleOut = m_out.anchor(m_out.constIntPtr(0));
+
+        m_out.jump(continuation);
+
+        m_out.appendTo(wastefulCase, continuation);
+
+        LValue vectorPtr = loadVectorReadOnly(basePtr);
+        LValue butterflyPtr = loadButterflyReadOnly(basePtr);
+        LValue arrayBufferPtr = m_out.loadPtr(butterflyPtr, m_heaps.Butterfly_arrayBuffer);
+        LValue dataPtr = m_out.loadPtr(arrayBufferPtr, m_heaps.ArrayBuffer_data);
+
+        ValueFromBlock wastefulOut = m_out.anchor(m_out.sub(vectorPtr, dataPtr));
+
+        m_out.jump(continuation);
+        m_out.appendTo(continuation, lastNext);
+
+        setInt32(m_out.castToInt32(m_out.phi(m_out.intPtr, simpleOut, wastefulOut)));
+    }
+    
+    void compileGetArrayLength()
+    {
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Double:
+        case Array::Contiguous: {
+            setInt32(m_out.load32NonNegative(lowStorage(m_node-&gt;child2()), m_heaps.Butterfly_publicLength));
+            return;
+        }
+            
+        case Array::String: {
+            LValue string = lowCell(m_node-&gt;child1());
+            setInt32(m_out.load32NonNegative(string, m_heaps.JSString_length));
+            return;
+        }
+            
+        case Array::DirectArguments: {
+            LValue arguments = lowCell(m_node-&gt;child1());
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.notNull(m_out.loadPtr(arguments, m_heaps.DirectArguments_overrides)));
+            setInt32(m_out.load32NonNegative(arguments, m_heaps.DirectArguments_length));
+            return;
+        }
+            
+        case Array::ScopedArguments: {
+            LValue arguments = lowCell(m_node-&gt;child1());
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.notZero32(m_out.load8ZeroExt32(arguments, m_heaps.ScopedArguments_overrodeThings)));
+            setInt32(m_out.load32NonNegative(arguments, m_heaps.ScopedArguments_totalLength));
+            return;
+        }
+            
+        default:
+            if (m_node-&gt;arrayMode().isSomeTypedArrayView()) {
+                setInt32(
+                    m_out.load32NonNegative(lowCell(m_node-&gt;child1()), m_heaps.JSArrayBufferView_length));
+                return;
+            }
+            
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            return;
+        }
+    }
+    
+    void compileCheckInBounds()
+    {
+        speculate(
+            OutOfBounds, noValue(), 0,
+            m_out.aboveOrEqual(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+    }
+    
+    void compileGetByVal()
+    {
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Contiguous: {
+            LValue index = lowInt32(m_node-&gt;child2());
+            LValue storage = lowStorage(m_node-&gt;child3());
+            
+            IndexedAbstractHeap&amp; heap = m_node-&gt;arrayMode().type() == Array::Int32 ?
+                m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties;
+            
+            if (m_node-&gt;arrayMode().isInBounds()) {
+                LValue result = m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2()));
+                LValue isHole = m_out.isZero64(result);
+                if (m_node-&gt;arrayMode().isSaneChain()) {
+                    DFG_ASSERT(
+                        m_graph, m_node, m_node-&gt;arrayMode().type() == Array::Contiguous);
+                    result = m_out.select(
+                        isHole, m_out.constInt64(JSValue::encode(jsUndefined())), result);
+                } else
+                    speculate(LoadFromHole, noValue(), 0, isHole);
+                setJSValue(result);
+                return;
+            }
+            
+            LValue base = lowCell(m_node-&gt;child1());
+            
+            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous fast case&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous continuation&quot;));
+            
+            m_out.branch(
+                m_out.aboveOrEqual(
+                    index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
+                rarely(slowCase), usually(fastCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(fastCase, slowCase);
+
+            LValue fastResultValue = m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2()));
+            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+            m_out.branch(
+                m_out.isZero64(fastResultValue), rarely(slowCase), usually(continuation));
+            
+            m_out.appendTo(slowCase, continuation);
+            ValueFromBlock slowResult = m_out.anchor(
+                vmCall(m_out.int64, m_out.operation(operationGetByValArrayInt), m_callFrame, base, index));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+            return;
+        }
+            
+        case Array::Double: {
+            LValue index = lowInt32(m_node-&gt;child2());
+            LValue storage = lowStorage(m_node-&gt;child3());
+            
+            IndexedAbstractHeap&amp; heap = m_heaps.indexedDoubleProperties;
+            
+            if (m_node-&gt;arrayMode().isInBounds()) {
+                LValue result = m_out.loadDouble(
+                    baseIndex(heap, storage, index, m_node-&gt;child2()));
+                
+                if (!m_node-&gt;arrayMode().isSaneChain()) {
+                    speculate(
+                        LoadFromHole, noValue(), 0,
+                        m_out.doubleNotEqualOrUnordered(result, result));
+                }
+                setDouble(result);
+                break;
+            }
+            
+            LValue base = lowCell(m_node-&gt;child1());
+            
+            LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double in bounds&quot;));
+            LBasicBlock boxPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double boxing&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double continuation&quot;));
+            
+            m_out.branch(
+                m_out.aboveOrEqual(
+                    index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
+                rarely(slowCase), usually(inBounds));
+            
+            LBasicBlock lastNext = m_out.appendTo(inBounds, boxPath);
+            LValue doubleValue = m_out.loadDouble(
+                baseIndex(heap, storage, index, m_node-&gt;child2()));
+            m_out.branch(
+                m_out.doubleNotEqualOrUnordered(doubleValue, doubleValue),
+                rarely(slowCase), usually(boxPath));
+            
+            m_out.appendTo(boxPath, slowCase);
+            ValueFromBlock fastResult = m_out.anchor(boxDouble(doubleValue));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(slowCase, continuation);
+            ValueFromBlock slowResult = m_out.anchor(
+                vmCall(m_out.int64, m_out.operation(operationGetByValArrayInt), m_callFrame, base, index));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+            return;
+        }
+
+        case Array::Undecided: {
+            LValue index = lowInt32(m_node-&gt;child2());
+
+            speculate(OutOfBounds, noValue(), m_node, m_out.lessThan(index, m_out.int32Zero));
+            setJSValue(m_out.constInt64(ValueUndefined));
+            return;
+        }
+            
+        case Array::DirectArguments: {
+            LValue base = lowCell(m_node-&gt;child1());
+            LValue index = lowInt32(m_node-&gt;child2());
+            
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.notNull(m_out.loadPtr(base, m_heaps.DirectArguments_overrides)));
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.aboveOrEqual(
+                    index,
+                    m_out.load32NonNegative(base, m_heaps.DirectArguments_length)));
+
+            TypedPointer address = m_out.baseIndex(
+                m_heaps.DirectArguments_storage, base, m_out.zeroExtPtr(index));
+            setJSValue(m_out.load64(address));
+            return;
+        }
+            
+        case Array::ScopedArguments: {
+            LValue base = lowCell(m_node-&gt;child1());
+            LValue index = lowInt32(m_node-&gt;child2());
+            
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.aboveOrEqual(
+                    index,
+                    m_out.load32NonNegative(base, m_heaps.ScopedArguments_totalLength)));
+            
+            LValue table = m_out.loadPtr(base, m_heaps.ScopedArguments_table);
+            LValue namedLength = m_out.load32(table, m_heaps.ScopedArgumentsTable_length);
+            
+            LBasicBlock namedCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments named case&quot;));
+            LBasicBlock overflowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments overflow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments continuation&quot;));
+            
+            m_out.branch(
+                m_out.aboveOrEqual(index, namedLength), unsure(overflowCase), unsure(namedCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(namedCase, overflowCase);
+            
+            LValue scope = m_out.loadPtr(base, m_heaps.ScopedArguments_scope);
+            LValue arguments = m_out.loadPtr(table, m_heaps.ScopedArgumentsTable_arguments);
+            
+            TypedPointer address = m_out.baseIndex(
+                m_heaps.scopedArgumentsTableArguments, arguments, m_out.zeroExtPtr(index));
+            LValue scopeOffset = m_out.load32(address);
+            
+            speculate(
+                ExoticObjectMode, noValue(), nullptr,
+                m_out.equal(scopeOffset, m_out.constInt32(ScopeOffset::invalidOffset)));
+            
+            address = m_out.baseIndex(
+                m_heaps.JSEnvironmentRecord_variables, scope, m_out.zeroExtPtr(scopeOffset));
+            ValueFromBlock namedResult = m_out.anchor(m_out.load64(address));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(overflowCase, continuation);
+            
+            address = m_out.baseIndex(
+                m_heaps.ScopedArguments_overflowStorage, base,
+                m_out.zeroExtPtr(m_out.sub(index, namedLength)));
+            LValue overflowValue = m_out.load64(address);
+            speculate(ExoticObjectMode, noValue(), nullptr, m_out.isZero64(overflowValue));
+            ValueFromBlock overflowResult = m_out.anchor(overflowValue);
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, namedResult, overflowResult));
+            return;
+        }
+            
+        case Array::Generic: {
+            setJSValue(vmCall(
+                m_out.int64, m_out.operation(operationGetByVal), m_callFrame,
+                lowJSValue(m_node-&gt;child1()), lowJSValue(m_node-&gt;child2())));
+            return;
+        }
+            
+        case Array::String: {
+            compileStringCharAt();
+            return;
+        }
+            
+        default: {
+            LValue index = lowInt32(m_node-&gt;child2());
+            LValue storage = lowStorage(m_node-&gt;child3());
+            
+            TypedArrayType type = m_node-&gt;arrayMode().typedArrayType();
+            
+            if (isTypedView(type)) {
+                TypedPointer pointer = TypedPointer(
+                    m_heaps.typedArrayProperties,
+                    m_out.add(
+                        storage,
+                        m_out.shl(
+                            m_out.zeroExtPtr(index),
+                            m_out.constIntPtr(logElementSize(type)))));
+                
+                if (isInt(type)) {
+                    LValue result;
+                    switch (elementSize(type)) {
+                    case 1:
+                        result = isSigned(type) ? m_out.load8SignExt32(pointer) : m_out.load8ZeroExt32(pointer);
+                        break;
+                    case 2:
+                        result = isSigned(type) ? m_out.load16SignExt32(pointer) : m_out.load16ZeroExt32(pointer);
+                        break;
+                    case 4:
+                        result = m_out.load32(pointer);
+                        break;
+                    default:
+                        DFG_CRASH(m_graph, m_node, &quot;Bad element size&quot;);
+                    }
+                    
+                    if (elementSize(type) &lt; 4 || isSigned(type)) {
+                        setInt32(result);
+                        return;
+                    }
+
+                    if (m_node-&gt;shouldSpeculateInt32()) {
+                        speculate(
+                            Overflow, noValue(), nullptr, m_out.lessThan(result, m_out.int32Zero));
+                        setInt32(result);
+                        return;
+                    }
+                    
+                    if (m_node-&gt;shouldSpeculateMachineInt()) {
+                        setStrictInt52(m_out.zeroExt(result, m_out.int64));
+                        return;
+                    }
+                    
+                    setDouble(m_out.unsignedToDouble(result));
+                    return;
+                }
+            
+                ASSERT(isFloat(type));
+                
+                LValue result;
+                switch (type) {
+                case TypeFloat32:
+                    result = m_out.floatToDouble(m_out.loadFloat(pointer));
+                    break;
+                case TypeFloat64:
+                    result = m_out.loadDouble(pointer);
+                    break;
+                default:
+                    DFG_CRASH(m_graph, m_node, &quot;Bad typed array type&quot;);
+                }
+                
+                setDouble(result);
+                return;
+            }
+            
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            return;
+        } }
+    }
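The typed-array fast path above computes each element's address as storage plus the index shifted by the element size's log2. A minimal standalone sketch of that addressing (hypothetical helper name, not JSC API):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the FTL addressing: element address = storage + (index << logElementSize).
// zero-extending the 32-bit index to pointer width before the shift.
uintptr_t typedArrayElementAddress(uintptr_t storage, uint32_t index, unsigned logElementSize)
{
    return storage + (static_cast<uintptr_t>(index) << logElementSize);
}
```

For example, element 3 of an Int32Array (logElementSize == 2) backed at 0x1000 lives at 0x100C.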
+    
+    void compileGetMyArgumentByVal()
+    {
+        InlineCallFrame* inlineCallFrame = m_node-&gt;child1()-&gt;origin.semantic.inlineCallFrame;
+        
+        LValue index = lowInt32(m_node-&gt;child2());
+        
+        LValue limit;
+        if (inlineCallFrame &amp;&amp; !inlineCallFrame-&gt;isVarargs())
+            limit = m_out.constInt32(inlineCallFrame-&gt;arguments.size() - 1);
+        else {
+            VirtualRegister argumentCountRegister;
+            if (!inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = inlineCallFrame-&gt;argumentCountRegister;
+            limit = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
+        }
+        
+        speculate(ExoticObjectMode, noValue(), nullptr, m_out.aboveOrEqual(index, limit));
+        
+        TypedPointer base;
+        if (inlineCallFrame) {
+            if (inlineCallFrame-&gt;arguments.size() &lt;= 1) {
+                // We should have already exited due to the bounds check, above. Just tell the
+                // compiler that anything dominated by this instruction is not reachable, so
+                // that we don't waste time generating such code. This will also plant some
+                // kind of crashing instruction so that if by some fluke the bounds check didn't
+                // work, we'll crash in an easy-to-see way.
+                didAlreadyTerminate();
+                return;
+            }
+            base = addressFor(inlineCallFrame-&gt;arguments[1].virtualRegister());
+        } else
+            base = addressFor(virtualRegisterForArgument(1));
+        
+        LValue pointer = m_out.baseIndex(
+            base.value(), m_out.zeroExt(index, m_out.intPtr), ScaleEight);
+        setJSValue(m_out.load64(TypedPointer(m_heaps.variables.atAnyIndex(), pointer)));
+    }
+    
+    void compilePutByVal()
+    {
+        Edge child1 = m_graph.varArgChild(m_node, 0);
+        Edge child2 = m_graph.varArgChild(m_node, 1);
+        Edge child3 = m_graph.varArgChild(m_node, 2);
+        Edge child4 = m_graph.varArgChild(m_node, 3);
+        Edge child5 = m_graph.varArgChild(m_node, 4);
+        
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Generic: {
+            V_JITOperation_EJJJ operation;
+            if (m_node-&gt;op() == PutByValDirect) {
+                if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
+                    operation = operationPutByValDirectStrict;
+                else
+                    operation = operationPutByValDirectNonStrict;
+            } else {
+                if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
+                    operation = operationPutByValStrict;
+                else
+                    operation = operationPutByValNonStrict;
+            }
+                
+            vmCall(
+                m_out.voidType, m_out.operation(operation), m_callFrame,
+                lowJSValue(child1), lowJSValue(child2), lowJSValue(child3));
+            return;
+        }
+            
+        default:
+            break;
+        }
+
+        LValue base = lowCell(child1);
+        LValue index = lowInt32(child2);
+        LValue storage = lowStorage(child4);
+        
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Double:
+        case Array::Contiguous: {
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal continuation&quot;));
+            LBasicBlock outerLastNext = m_out.appendTo(m_out.m_block, continuation);
+            
+            switch (m_node-&gt;arrayMode().type()) {
+            case Array::Int32:
+            case Array::Contiguous: {
+                LValue value = lowJSValue(child3, ManualOperandSpeculation);
+                
+                if (m_node-&gt;arrayMode().type() == Array::Int32)
+                    FTL_TYPE_CHECK(jsValueValue(value), child3, SpecInt32, isNotInt32(value));
+                
+                TypedPointer elementPointer = m_out.baseIndex(
+                    m_node-&gt;arrayMode().type() == Array::Int32 ?
+                    m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties,
+                    storage, m_out.zeroExtPtr(index), provenValue(child2));
+                
+                if (m_node-&gt;op() == PutByValAlias) {
+                    m_out.store64(value, elementPointer);
+                    break;
+                }
+                
+                contiguousPutByValOutOfBounds(
+                    codeBlock()-&gt;isStrictMode()
+                    ? operationPutByValBeyondArrayBoundsStrict
+                    : operationPutByValBeyondArrayBoundsNonStrict,
+                    base, storage, index, value, continuation);
+                
+                m_out.store64(value, elementPointer);
+                break;
+            }
+                
+            case Array::Double: {
+                LValue value = lowDouble(child3);
+                
+                FTL_TYPE_CHECK(
+                    doubleValue(value), child3, SpecDoubleReal,
+                    m_out.doubleNotEqualOrUnordered(value, value));
+                
+                TypedPointer elementPointer = m_out.baseIndex(
+                    m_heaps.indexedDoubleProperties, storage, m_out.zeroExtPtr(index),
+                    provenValue(child2));
+                
+                if (m_node-&gt;op() == PutByValAlias) {
+                    m_out.storeDouble(value, elementPointer);
+                    break;
+                }
+                
+                contiguousPutByValOutOfBounds(
+                    codeBlock()-&gt;isStrictMode()
+                    ? operationPutDoubleByValBeyondArrayBoundsStrict
+                    : operationPutDoubleByValBeyondArrayBoundsNonStrict,
+                    base, storage, index, value, continuation);
+                
+                m_out.storeDouble(value, elementPointer);
+                break;
+            }
+                
+            default:
+                DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            }
+
+            m_out.jump(continuation);
+            m_out.appendTo(continuation, outerLastNext);
+            return;
+        }
+            
+        default:
+            TypedArrayType type = m_node-&gt;arrayMode().typedArrayType();
+            
+            if (isTypedView(type)) {
+                TypedPointer pointer = TypedPointer(
+                    m_heaps.typedArrayProperties,
+                    m_out.add(
+                        storage,
+                        m_out.shl(
+                            m_out.zeroExt(index, m_out.intPtr),
+                            m_out.constIntPtr(logElementSize(type)))));
+                
+                Output::StoreType storeType;
+                LValue valueToStore;
+                
+                if (isInt(type)) {
+                    LValue intValue;
+                    switch (child3.useKind()) {
+                    case Int52RepUse:
+                    case Int32Use: {
+                        if (child3.useKind() == Int32Use)
+                            intValue = lowInt32(child3);
+                        else
+                            intValue = m_out.castToInt32(lowStrictInt52(child3));
+
+                        if (isClamped(type)) {
+                            ASSERT(elementSize(type) == 1);
+                            
+                            LBasicBlock atLeastZero = FTL_NEW_BLOCK(m_out, (&quot;PutByVal int clamp atLeastZero&quot;));
+                            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal int clamp continuation&quot;));
+                            
+                            Vector&lt;ValueFromBlock, 2&gt; intValues;
+                            intValues.append(m_out.anchor(m_out.int32Zero));
+                            m_out.branch(
+                                m_out.lessThan(intValue, m_out.int32Zero),
+                                unsure(continuation), unsure(atLeastZero));
+                            
+                            LBasicBlock lastNext = m_out.appendTo(atLeastZero, continuation);
+                            
+                            intValues.append(m_out.anchor(m_out.select(
+                                m_out.greaterThan(intValue, m_out.constInt32(255)),
+                                m_out.constInt32(255),
+                                intValue)));
+                            m_out.jump(continuation);
+                            
+                            m_out.appendTo(continuation, lastNext);
+                            intValue = m_out.phi(m_out.int32, intValues);
+                        }
+                        break;
+                    }
+                        
+                    case DoubleRepUse: {
+                        LValue doubleValue = lowDouble(child3);
+                        
+                        if (isClamped(type)) {
+                            ASSERT(elementSize(type) == 1);
+                            
+                            LBasicBlock atLeastZero = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp atLeastZero&quot;));
+                            LBasicBlock withinRange = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp withinRange&quot;));
+                            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp continuation&quot;));
+                            
+                            Vector&lt;ValueFromBlock, 3&gt; intValues;
+                            intValues.append(m_out.anchor(m_out.int32Zero));
+                            m_out.branch(
+                                m_out.doubleLessThanOrUnordered(doubleValue, m_out.doubleZero),
+                                unsure(continuation), unsure(atLeastZero));
+                            
+                            LBasicBlock lastNext = m_out.appendTo(atLeastZero, withinRange);
+                            intValues.append(m_out.anchor(m_out.constInt32(255)));
+                            m_out.branch(
+                                m_out.doubleGreaterThan(doubleValue, m_out.constDouble(255)),
+                                unsure(continuation), unsure(withinRange));
+                            
+                            m_out.appendTo(withinRange, continuation);
+                            intValues.append(m_out.anchor(m_out.doubleToInt(doubleValue)));
+                            m_out.jump(continuation);
+                            
+                            m_out.appendTo(continuation, lastNext);
+                            intValue = m_out.phi(m_out.int32, intValues);
+                        } else
+                            intValue = doubleToInt32(doubleValue);
+                        break;
+                    }
+                        
+                    default:
+                        DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                    }
+
+                    valueToStore = intValue;
+                    switch (elementSize(type)) {
+                    case 1:
+                        storeType = Output::Store32As8;
+                        break;
+                    case 2:
+                        storeType = Output::Store32As16;
+                        break;
+                    case 4:
+                        storeType = Output::Store32;
+                        break;
+                    default:
+                        DFG_CRASH(m_graph, m_node, &quot;Bad element size&quot;);
+                    }
+                } else /* !isInt(type) */ {
+                    LValue value = lowDouble(child3);
+                    switch (type) {
+                    case TypeFloat32:
+                        valueToStore = m_out.doubleToFloat(value);
+                        storeType = Output::StoreFloat;
+                        break;
+                    case TypeFloat64:
+                        valueToStore = value;
+                        storeType = Output::StoreDouble;
+                        break;
+                    default:
+                        DFG_CRASH(m_graph, m_node, &quot;Bad typed array type&quot;);
+                    }
+                }
+                
+                if (m_node-&gt;arrayMode().isInBounds() || m_node-&gt;op() == PutByValAlias)
+                    m_out.store(valueToStore, pointer, storeType);
+                else {
+                    LBasicBlock isInBounds = FTL_NEW_BLOCK(m_out, (&quot;PutByVal typed array in bounds case&quot;));
+                    LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal typed array continuation&quot;));
+                    
+                    m_out.branch(
+                        m_out.aboveOrEqual(index, lowInt32(child5)),
+                        unsure(continuation), unsure(isInBounds));
+                    
+                    LBasicBlock lastNext = m_out.appendTo(isInBounds, continuation);
+                    m_out.store(valueToStore, pointer, storeType);
+                    m_out.jump(continuation);
+                    
+                    m_out.appendTo(continuation, lastNext);
+                }
+                
+                return;
+            }
+            
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            break;
+        }
+    }
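The clamped-store branches above reduce an int32 or double to the 0..255 byte range before the store: negatives (and NaN, via doubleLessThanOrUnordered) take the zero path, values above 255 saturate, and in-range doubles are truncated to an integer. A standalone sketch of that reduction (hypothetical helpers, not JSC code):

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// Mirrors the FTL int clamp: below zero -> 0, above 255 -> 255, else unchanged.
uint8_t clampToUint8(int32_t value)
{
    if (value < 0)
        return 0;
    if (value > 255)
        return 255;
    return static_cast<uint8_t>(value);
}

// Mirrors the FTL double clamp: !(value >= 0) is true for both negatives and
// NaN, matching the doubleLessThanOrUnordered branch; in-range values are
// truncated, matching doubleToInt.
uint8_t clampToUint8(double value)
{
    if (!(value >= 0))
        return 0;
    if (value > 255)
        return 255;
    return static_cast<uint8_t>(value);
}
```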
+
+    void compilePutAccessorById()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue accessor = lowCell(m_node-&gt;child2());
+        auto uid = m_graph.identifiers()[m_node-&gt;identifierNumber()];
+        vmCall(
+            m_out.voidType,
+            m_out.operation(m_node-&gt;op() == PutGetterById ? operationPutGetterById : operationPutSetterById),
+            m_callFrame, base, m_out.constIntPtr(uid), m_out.constInt32(m_node-&gt;accessorAttributes()), accessor);
+    }
+
+    void compilePutGetterSetterById()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue getter = lowJSValue(m_node-&gt;child2());
+        LValue setter = lowJSValue(m_node-&gt;child3());
+        auto uid = m_graph.identifiers()[m_node-&gt;identifierNumber()];
+        vmCall(
+            m_out.voidType, m_out.operation(operationPutGetterSetter),
+            m_callFrame, base, m_out.constIntPtr(uid), m_out.constInt32(m_node-&gt;accessorAttributes()), getter, setter);
+    }
+
+    void compilePutAccessorByVal()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue subscript = lowJSValue(m_node-&gt;child2());
+        LValue accessor = lowCell(m_node-&gt;child3());
+        vmCall(
+            m_out.voidType,
+            m_out.operation(m_node-&gt;op() == PutGetterByVal ? operationPutGetterByVal : operationPutSetterByVal),
+            m_callFrame, base, subscript, m_out.constInt32(m_node-&gt;accessorAttributes()), accessor);
+    }
+    
+    void compileArrayPush()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue storage = lowStorage(m_node-&gt;child3());
+        
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Contiguous:
+        case Array::Double: {
+            LValue value;
+            Output::StoreType storeType;
+            
+            if (m_node-&gt;arrayMode().type() != Array::Double) {
+                value = lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation);
+                if (m_node-&gt;arrayMode().type() == Array::Int32) {
+                    FTL_TYPE_CHECK(
+                        jsValueValue(value), m_node-&gt;child2(), SpecInt32, isNotInt32(value));
+                }
+                storeType = Output::Store64;
+            } else {
+                value = lowDouble(m_node-&gt;child2());
+                FTL_TYPE_CHECK(
+                    doubleValue(value), m_node-&gt;child2(), SpecDoubleReal,
+                    m_out.doubleNotEqualOrUnordered(value, value));
+                storeType = Output::StoreDouble;
+            }
+            
+            IndexedAbstractHeap&amp; heap = m_heaps.forArrayType(m_node-&gt;arrayMode().type());
+
+            LValue prevLength = m_out.load32(storage, m_heaps.Butterfly_publicLength);
+            
+            LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush fast path&quot;));
+            LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush slow path&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush continuation&quot;));
+            
+            m_out.branch(
+                m_out.aboveOrEqual(
+                    prevLength, m_out.load32(storage, m_heaps.Butterfly_vectorLength)),
+                unsure(slowPath), unsure(fastPath));
+            
+            LBasicBlock lastNext = m_out.appendTo(fastPath, slowPath);
+            m_out.store(
+                value, m_out.baseIndex(heap, storage, m_out.zeroExtPtr(prevLength)), storeType);
+            LValue newLength = m_out.add(prevLength, m_out.int32One);
+            m_out.store32(newLength, storage, m_heaps.Butterfly_publicLength);
+            
+            ValueFromBlock fastResult = m_out.anchor(boxInt32(newLength));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(slowPath, continuation);
+            LValue operation;
+            if (m_node-&gt;arrayMode().type() != Array::Double)
+                operation = m_out.operation(operationArrayPush);
+            else
+                operation = m_out.operation(operationArrayPushDouble);
+            ValueFromBlock slowResult = m_out.anchor(
+                vmCall(m_out.int64, operation, m_callFrame, value, base));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+            return;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            return;
+        }
+    }
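The push lowering above branches on publicLength vs. vectorLength: in bounds, it stores at publicLength, bumps it, and returns the new length; otherwise it calls out to operationArrayPush. A rough standalone sketch of that shape (hypothetical types; the real slow path reallocates the butterfly, modeled here as a resize):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for the butterfly storage the FTL code loads from.
struct Butterfly {
    uint32_t publicLength = 0;
    uint32_t vectorLength = 0;
    std::vector<int64_t> slots;
};

// Fast path when publicLength < vectorLength; slow path (grow) otherwise.
// Returns the new length, as ArrayPush does.
uint32_t arrayPush(Butterfly& storage, int64_t value)
{
    if (storage.publicLength >= storage.vectorLength) {
        // Slow path: operationArrayPush reallocates; modeled as doubling.
        storage.vectorLength = storage.vectorLength ? storage.vectorLength * 2 : 4;
        storage.slots.resize(storage.vectorLength);
    }
    storage.slots[storage.publicLength] = value;
    return ++storage.publicLength;
}
```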
+    
+    void compileArrayPop()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue storage = lowStorage(m_node-&gt;child2());
+        
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Double:
+        case Array::Contiguous: {
+            IndexedAbstractHeap&amp; heap = m_heaps.forArrayType(m_node-&gt;arrayMode().type());
+            
+            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop fast case&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop continuation&quot;));
+            
+            LValue prevLength = m_out.load32(storage, m_heaps.Butterfly_publicLength);
+            
+            Vector&lt;ValueFromBlock, 3&gt; results;
+            results.append(m_out.anchor(m_out.constInt64(JSValue::encode(jsUndefined()))));
+            m_out.branch(
+                m_out.isZero32(prevLength), rarely(continuation), usually(fastCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(fastCase, slowCase);
+            LValue newLength = m_out.sub(prevLength, m_out.int32One);
+            m_out.store32(newLength, storage, m_heaps.Butterfly_publicLength);
+            TypedPointer pointer = m_out.baseIndex(heap, storage, m_out.zeroExtPtr(newLength));
+            if (m_node-&gt;arrayMode().type() != Array::Double) {
+                LValue result = m_out.load64(pointer);
+                m_out.store64(m_out.int64Zero, pointer);
+                results.append(m_out.anchor(result));
+                m_out.branch(
+                    m_out.notZero64(result), usually(continuation), rarely(slowCase));
+            } else {
+                LValue result = m_out.loadDouble(pointer);
+                m_out.store64(m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)), pointer);
+                results.append(m_out.anchor(boxDouble(result)));
+                m_out.branch(
+                    m_out.doubleEqual(result, result),
+                    usually(continuation), rarely(slowCase));
+            }
+            
+            m_out.appendTo(slowCase, continuation);
+            results.append(m_out.anchor(vmCall(
+                m_out.int64, m_out.operation(operationArrayPopAndRecoverLength), m_callFrame, base)));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, results));
+            return;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
+            return;
+        }
+    }
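The pop lowering above returns undefined for an empty array; otherwise it decrements publicLength, loads the last slot, and clears it (int64Zero, or PNaN for double arrays) so no stale value lingers in the butterfly. A sketch of that control flow (hypothetical types; the hole-on-read slow path through operationArrayPopAndRecoverLength is omitted):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

struct Storage {
    uint32_t publicLength = 0;
    std::vector<int64_t> slots;
};

// Empty array -> no value (undefined in JS); otherwise shrink publicLength,
// read the vacated slot, and clear it, mirroring the store of int64Zero.
std::optional<int64_t> arrayPop(Storage& storage)
{
    if (!storage.publicLength)
        return std::nullopt;
    uint32_t newLength = --storage.publicLength;
    int64_t result = storage.slots[newLength];
    storage.slots[newLength] = 0;
    return result;
}
```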
+    
+    void compileCreateActivation()
+    {
+        LValue scope = lowCell(m_node-&gt;child1());
+        SymbolTable* table = m_node-&gt;castOperand&lt;SymbolTable*&gt;();
+        Structure* structure = m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;activationStructure();
+        JSValue initializationValue = m_node-&gt;initializationValueForActivation();
+        ASSERT(initializationValue.isUndefined() || initializationValue == jsTDZValue());
+        if (table-&gt;singletonScope()-&gt;isStillValid()) {
+            LValue callResult = vmCall(
+                m_out.int64,
+                m_out.operation(operationCreateActivationDirect), m_callFrame, weakPointer(structure),
+                scope, weakPointer(table), m_out.constInt64(JSValue::encode(initializationValue)));
+            setJSValue(callResult);
+            return;
+        }
+        
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CreateActivation slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CreateActivation continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        LValue fastObject = allocateObject&lt;JSLexicalEnvironment&gt;(
+            JSLexicalEnvironment::allocationSize(table), structure, m_out.intPtrZero, slowPath);
+        
+        // We don't need memory barriers since we just fast-created the activation, so the
+        // activation must be young.
+        m_out.storePtr(scope, fastObject, m_heaps.JSScope_next);
+        m_out.storePtr(weakPointer(table), fastObject, m_heaps.JSSymbolTableObject_symbolTable);
+        
+        for (unsigned i = 0; i &lt; table-&gt;scopeSize(); ++i) {
+            m_out.store64(
+                m_out.constInt64(JSValue::encode(initializationValue)),
+                fastObject, m_heaps.JSEnvironmentRecord_variables[i]);
+        }
+        
+        ValueFromBlock fastResult = m_out.anchor(fastObject);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateActivationDirect, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(table),
+                    CCallHelpers::TrustedImm64(JSValue::encode(initializationValue)));
+            },
+            scope);
+        ValueFromBlock slowResult = m_out.anchor(callResult);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
+    }
+    
+    void compileNewFunction()
+    {
+        ASSERT(m_node-&gt;op() == NewFunction || m_node-&gt;op() == NewArrowFunction || m_node-&gt;op() == NewGeneratorFunction);
+        bool isGeneratorFunction = m_node-&gt;op() == NewGeneratorFunction;
+        
+        LValue scope = lowCell(m_node-&gt;child1());
+        
+        FunctionExecutable* executable = m_node-&gt;castOperand&lt;FunctionExecutable*&gt;();
+        if (executable-&gt;singletonFunction()-&gt;isStillValid()) {
+            LValue callResult =
+                isGeneratorFunction ? vmCall(m_out.int64, m_out.operation(operationNewGeneratorFunction), m_callFrame, scope, weakPointer(executable)) :
+                vmCall(m_out.int64, m_out.operation(operationNewFunction), m_callFrame, scope, weakPointer(executable));
+            setJSValue(callResult);
+            return;
+        }
+        
+        Structure* structure =
+            isGeneratorFunction ? m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;generatorFunctionStructure() :
+            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;functionStructure();
+        
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;NewFunction slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewFunction continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        LValue fastObject =
+            isGeneratorFunction ? allocateObject&lt;JSGeneratorFunction&gt;(structure, m_out.intPtrZero, slowPath) :
+            allocateObject&lt;JSFunction&gt;(structure, m_out.intPtrZero, slowPath);
+        
+        // We don't need memory barriers since we just fast-created the function, so it
+        // must be young.
+        m_out.storePtr(scope, fastObject, m_heaps.JSFunction_scope);
+        m_out.storePtr(weakPointer(executable), fastObject, m_heaps.JSFunction_executable);
+        m_out.storePtr(m_out.intPtrZero, fastObject, m_heaps.JSFunction_rareData);
+        
+        ValueFromBlock fastResult = m_out.anchor(fastObject);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+
+        Vector&lt;LValue&gt; slowPathArguments;
+        slowPathArguments.append(scope);
+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                if (isGeneratorFunction) {
+                    return createLazyCallGenerator(
+                        operationNewGeneratorFunctionWithInvalidatedReallocationWatchpoint,
+                        locations[0].directGPR(), locations[1].directGPR(),
+                        CCallHelpers::TrustedImmPtr(executable));
+                }
+                return createLazyCallGenerator(
+                    operationNewFunctionWithInvalidatedReallocationWatchpoint,
+                    locations[0].directGPR(), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(executable));
+            },
+            slowPathArguments);
+        ValueFromBlock slowResult = m_out.anchor(callResult);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
+    }
+    
+    void compileCreateDirectArguments()
+    {
+        // FIXME: A more effective way of dealing with the argument count and callee is to have
+        // them be explicit arguments to this node.
+        // https://bugs.webkit.org/show_bug.cgi?id=142207
+        
+        Structure* structure =
+            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;directArgumentsStructure();
+        
+        unsigned minCapacity = m_graph.baselineCodeBlockFor(m_node-&gt;origin.semantic)-&gt;numParameters() - 1;
+        
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        ArgumentsLength length = getArgumentsLength();
+        
+        LValue fastObject;
+        if (length.isKnown) {
+            fastObject = allocateObject&lt;DirectArguments&gt;(
+                DirectArguments::allocationSize(std::max(length.known, minCapacity)), structure,
+                m_out.intPtrZero, slowPath);
+        } else {
+            LValue size = m_out.add(
+                m_out.shl(length.value, m_out.constInt32(3)),
+                m_out.constInt32(DirectArguments::storageOffset()));
+            
+            size = m_out.select(
+                m_out.aboveOrEqual(length.value, m_out.constInt32(minCapacity)),
+                size, m_out.constInt32(DirectArguments::allocationSize(minCapacity)));
+            
+            fastObject = allocateVariableSizedObject&lt;DirectArguments&gt;(
+                size, structure, m_out.intPtrZero, slowPath);
+        }
+        
+        m_out.store32(length.value, fastObject, m_heaps.DirectArguments_length);
+        m_out.store32(m_out.constInt32(minCapacity), fastObject, m_heaps.DirectArguments_minCapacity);
+        m_out.storePtr(m_out.intPtrZero, fastObject, m_heaps.DirectArguments_overrides);
+        
+        ValueFromBlock fastResult = m_out.anchor(fastObject);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateDirectArguments, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImm32(minCapacity));
+            }, length.value);
+        ValueFromBlock slowResult = m_out.anchor(callResult);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        LValue result = m_out.phi(m_out.intPtr, fastResult, slowResult);
+
+        m_out.storePtr(getCurrentCallee(), result, m_heaps.DirectArguments_callee);
+        
+        if (length.isKnown) {
+            VirtualRegister start = AssemblyHelpers::argumentsStart(m_node-&gt;origin.semantic);
+            for (unsigned i = 0; i &lt; std::max(length.known, minCapacity); ++i) {
+                m_out.store64(
+                    m_out.load64(addressFor(start + i)),
+                    result, m_heaps.DirectArguments_storage[i]);
+            }
+        } else {
+            LValue stackBase = getArgumentsStart();
+            
+            LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments loop body&quot;));
+            LBasicBlock end = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments loop end&quot;));
+
+            ValueFromBlock originalLength;
+            if (minCapacity) {
+                LValue capacity = m_out.select(
+                    m_out.aboveOrEqual(length.value, m_out.constInt32(minCapacity)),
+                    length.value,
+                    m_out.constInt32(minCapacity));
+                LValue originalLengthValue = m_out.zeroExtPtr(capacity);
+                originalLength = m_out.anchor(originalLengthValue);
+                m_out.jump(loop);
+            } else {
+                LValue originalLengthValue = m_out.zeroExtPtr(length.value);
+                originalLength = m_out.anchor(originalLengthValue);
+                m_out.branch(m_out.isNull(originalLengthValue), unsure(end), unsure(loop));
+            }
+            
+            lastNext = m_out.appendTo(loop, end);
+            LValue previousIndex = m_out.phi(m_out.intPtr, originalLength);
+            LValue index = m_out.sub(previousIndex, m_out.intPtrOne);
+            m_out.store64(
+                m_out.load64(m_out.baseIndex(m_heaps.variables, stackBase, index)),
+                m_out.baseIndex(m_heaps.DirectArguments_storage, result, index));
+            ValueFromBlock nextIndex = m_out.anchor(index);
+            m_out.addIncomingToPhi(previousIndex, nextIndex);
+            m_out.branch(m_out.isNull(index), unsure(end), unsure(loop));
+            
+            m_out.appendTo(end, lastNext);
+        }
+        
+        setJSValue(result);
+    }
+    
+    void compileCreateScopedArguments()
+    {
+        LValue scope = lowCell(m_node-&gt;child1());
+        
+        LValue result = vmCall(
+            m_out.int64, m_out.operation(operationCreateScopedArguments), m_callFrame,
+            weakPointer(
+                m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;scopedArgumentsStructure()),
+            getArgumentsStart(), getArgumentsLength().value, getCurrentCallee(), scope);
+        
+        setJSValue(result);
+    }
+    
+    void compileCreateClonedArguments()
+    {
+        LValue result = vmCall(
+            m_out.int64, m_out.operation(operationCreateClonedArguments), m_callFrame,
+            weakPointer(
+                m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;outOfBandArgumentsStructure()),
+            getArgumentsStart(), getArgumentsLength().value, getCurrentCallee());
+        
+        setJSValue(result);
+    }
+
+    void compileCopyRest()
+    {
+        LBasicBlock doCopyRest = FTL_NEW_BLOCK(m_out, (&quot;CopyRest C call&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CopyRest continuation&quot;));
+
+        LValue arrayLength = lowInt32(m_node-&gt;child2());
+
+        m_out.branch(
+            m_out.equal(arrayLength, m_out.constInt32(0)),
+            unsure(continuation), unsure(doCopyRest));
+            
+        LBasicBlock lastNext = m_out.appendTo(doCopyRest, continuation);
+        // Arguments: 0:exec, 1:JSCell* array, 2:arguments start, 3:number of arguments to skip, 4:array length
+        LValue numberOfArgumentsToSkip = m_out.constInt32(m_node-&gt;numberOfArgumentsToSkip());
+        vmCall(
+            m_out.voidType, m_out.operation(operationCopyRest), m_callFrame, lowCell(m_node-&gt;child1()),
+            getArgumentsStart(), numberOfArgumentsToSkip, arrayLength);
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+    }
+
+    void compileGetRestLength()
+    {
+        LBasicBlock nonZeroLength = FTL_NEW_BLOCK(m_out, (&quot;GetRestLength non zero&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetRestLength continuation&quot;));
+        
+        ValueFromBlock zeroLengthResult = m_out.anchor(m_out.constInt32(0));
+
+        LValue numberOfArgumentsToSkip = m_out.constInt32(m_node-&gt;numberOfArgumentsToSkip());
+        LValue argumentsLength = getArgumentsLength().value;
+        m_out.branch(m_out.above(argumentsLength, numberOfArgumentsToSkip),
+            unsure(nonZeroLength), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(nonZeroLength, continuation);
+        ValueFromBlock nonZeroLengthResult = m_out.anchor(m_out.sub(argumentsLength, numberOfArgumentsToSkip));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setInt32(m_out.phi(m_out.int32, zeroLengthResult, nonZeroLengthResult));
+    }
+    
+    void compileNewObject()
+    {
+        setJSValue(allocateObject(m_node-&gt;structure()));
+    }
+    
+    void compileNewArray()
+    {
+        // First speculate appropriately on all of the children. Do this unconditionally up here
+        // because some of the slow paths may otherwise forget to do it. It's sort of arguable
+        // that doing the speculations up here might be unprofitable for register allocation - so
+        // we can consider sinking this to below the allocation fast path if we find that this
+        // causes a lot of register pressure.
+        for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex)
+            speculate(m_graph.varArgChild(m_node, operandIndex));
+        
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
+            m_node-&gt;indexingType());
+
+        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
+            unsigned numElements = m_node-&gt;numChildren();
+            
+            ArrayValues arrayValues = allocateJSArray(structure, numElements);
+            
+            for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex) {
+                Edge edge = m_graph.varArgChild(m_node, operandIndex);
+                
+                switch (m_node-&gt;indexingType()) {
+                case ALL_BLANK_INDEXING_TYPES:
+                case ALL_UNDECIDED_INDEXING_TYPES:
+                    DFG_CRASH(m_graph, m_node, &quot;Bad indexing type&quot;);
+                    break;
+                    
+                case ALL_DOUBLE_INDEXING_TYPES:
+                    m_out.storeDouble(
+                        lowDouble(edge),
+                        arrayValues.butterfly, m_heaps.indexedDoubleProperties[operandIndex]);
+                    break;
+                    
+                case ALL_INT32_INDEXING_TYPES:
+                case ALL_CONTIGUOUS_INDEXING_TYPES:
+                    m_out.store64(
+                        lowJSValue(edge, ManualOperandSpeculation),
+                        arrayValues.butterfly,
+                        m_heaps.forIndexingType(m_node-&gt;indexingType())-&gt;at(operandIndex));
+                    break;
+                    
+                default:
+                    DFG_CRASH(m_graph, m_node, &quot;Corrupt indexing type&quot;);
+                    break;
+                }
+            }
+            
+            setJSValue(arrayValues.array);
+            return;
+        }
+        
+        if (!m_node-&gt;numChildren()) {
+            setJSValue(vmCall(
+                m_out.int64, m_out.operation(operationNewEmptyArray), m_callFrame,
+                m_out.constIntPtr(structure)));
+            return;
+        }
+        
+        size_t scratchSize = sizeof(EncodedJSValue) * m_node-&gt;numChildren();
+        ASSERT(scratchSize);
+        ScratchBuffer* scratchBuffer = vm().scratchBufferForSize(scratchSize);
+        EncodedJSValue* buffer = static_cast&lt;EncodedJSValue*&gt;(scratchBuffer-&gt;dataBuffer());
+        
+        for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex) {
+            Edge edge = m_graph.varArgChild(m_node, operandIndex);
+            m_out.store64(
+                lowJSValue(edge, ManualOperandSpeculation),
+                m_out.absolute(buffer + operandIndex));
+        }
+        
+        m_out.storePtr(
+            m_out.constIntPtr(scratchSize), m_out.absolute(scratchBuffer-&gt;activeLengthPtr()));
+        
+        LValue result = vmCall(
+            m_out.int64, m_out.operation(operationNewArray), m_callFrame,
+            m_out.constIntPtr(structure), m_out.constIntPtr(buffer),
+            m_out.constIntPtr(m_node-&gt;numChildren()));
+        
+        m_out.storePtr(m_out.intPtrZero, m_out.absolute(scratchBuffer-&gt;activeLengthPtr()));
+        
+        setJSValue(result);
+    }
+    
+    void compileNewArrayBuffer()
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
+            m_node-&gt;indexingType());
+        
+        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
+            unsigned numElements = m_node-&gt;numConstants();
+            
+            ArrayValues arrayValues = allocateJSArray(structure, numElements);
+            
+            JSValue* data = codeBlock()-&gt;constantBuffer(m_node-&gt;startConstant());
+            for (unsigned index = 0; index &lt; m_node-&gt;numConstants(); ++index) {
+                int64_t value;
+                if (hasDouble(m_node-&gt;indexingType()))
+                    value = bitwise_cast&lt;int64_t&gt;(data[index].asNumber());
+                else
+                    value = JSValue::encode(data[index]);
+                
+                m_out.store64(
+                    m_out.constInt64(value),
+                    arrayValues.butterfly,
+                    m_heaps.forIndexingType(m_node-&gt;indexingType())-&gt;at(index));
+            }
+            
+            setJSValue(arrayValues.array);
+            return;
+        }
+        
+        setJSValue(vmCall(
+            m_out.int64, m_out.operation(operationNewArrayBuffer), m_callFrame,
+            m_out.constIntPtr(structure), m_out.constIntPtr(m_node-&gt;startConstant()),
+            m_out.constIntPtr(m_node-&gt;numConstants())));
+    }
+    
+    void compileNewArrayWithSize()
+    {
+        LValue publicLength = lowInt32(m_node-&gt;child1());
+        
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
+            m_node-&gt;indexingType());
+        
+        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
+            ASSERT(
+                hasUndecided(structure-&gt;indexingType())
+                || hasInt32(structure-&gt;indexingType())
+                || hasDouble(structure-&gt;indexingType())
+                || hasContiguous(structure-&gt;indexingType()));
+
+            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize fast case&quot;));
+            LBasicBlock largeCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize large case&quot;));
+            LBasicBlock failCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize fail case&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize continuation&quot;));
+            
+            m_out.branch(
+                m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
+                rarely(largeCase), usually(fastCase));
+
+            LBasicBlock lastNext = m_out.appendTo(fastCase, largeCase);
+            
+            // We don't round up to BASE_VECTOR_LEN for new Array(blah).
+            LValue vectorLength = publicLength;
+            
+            LValue payloadSize =
+                m_out.shl(m_out.zeroExt(vectorLength, m_out.intPtr), m_out.constIntPtr(3));
+            
+            LValue butterflySize = m_out.add(
+                payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
+            
+            LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, failCase);
+            
+            LValue butterfly = m_out.sub(endOfStorage, payloadSize);
+            
+            LValue object = allocateObject&lt;JSArray&gt;(structure, butterfly, failCase);
+            
+            m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
+            m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
+            
+            if (hasDouble(m_node-&gt;indexingType())) {
+                LBasicBlock initLoop = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize double init loop&quot;));
+                LBasicBlock initDone = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize double init done&quot;));
+                
+                ValueFromBlock originalIndex = m_out.anchor(vectorLength);
+                ValueFromBlock originalPointer = m_out.anchor(butterfly);
+                m_out.branch(
+                    m_out.notZero32(vectorLength), unsure(initLoop), unsure(initDone));
+                
+                LBasicBlock initLastNext = m_out.appendTo(initLoop, initDone);
+                LValue index = m_out.phi(m_out.int32, originalIndex);
+                LValue pointer = m_out.phi(m_out.intPtr, originalPointer);
+                
+                m_out.store64(
+                    m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)),
+                    TypedPointer(m_heaps.indexedDoubleProperties.atAnyIndex(), pointer));
+                
+                LValue nextIndex = m_out.sub(index, m_out.int32One);
+                m_out.addIncomingToPhi(index, m_out.anchor(nextIndex));
+                m_out.addIncomingToPhi(pointer, m_out.anchor(m_out.add(pointer, m_out.intPtrEight)));
+                m_out.branch(
+                    m_out.notZero32(nextIndex), unsure(initLoop), unsure(initDone));
+                
+                m_out.appendTo(initDone, initLastNext);
+            }
+            
+            ValueFromBlock fastResult = m_out.anchor(object);
+            m_out.jump(continuation);
+            
+            m_out.appendTo(largeCase, failCase);
+            ValueFromBlock largeStructure = m_out.anchor(m_out.constIntPtr(
+                globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)));
+            m_out.jump(slowCase);
+            
+            m_out.appendTo(failCase, slowCase);
+            ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
+            m_out.jump(slowCase);
+            
+            m_out.appendTo(slowCase, continuation);
+            LValue structureValue = m_out.phi(
+                m_out.intPtr, largeStructure, failStructure);
+            LValue slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationNewArrayWithSize, locations[0].directGPR(),
+                        locations[1].directGPR(), locations[2].directGPR());
+                },
+                structureValue, publicLength);
+            ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
+            return;
+        }
+        
+        LValue structureValue = m_out.select(
+            m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
+            m_out.constIntPtr(
+                globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)),
+            m_out.constIntPtr(structure));
+        setJSValue(vmCall(m_out.int64, m_out.operation(operationNewArrayWithSize), m_callFrame, structureValue, publicLength));
+    }
+
+    void compileNewTypedArray()
+    {
+        TypedArrayType type = m_node-&gt;typedArrayType();
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        
+        switch (m_node-&gt;child1().useKind()) {
+        case Int32Use: {
+            Structure* structure = globalObject-&gt;typedArrayStructure(type);
+
+            LValue size = lowInt32(m_node-&gt;child1());
+
+            LBasicBlock smallEnoughCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray small enough case&quot;));
+            LBasicBlock nonZeroCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray non-zero case&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray continuation&quot;));
+
+            m_out.branch(
+                m_out.above(size, m_out.constInt32(JSArrayBufferView::fastSizeLimit)),
+                rarely(slowCase), usually(smallEnoughCase));
+
+            LBasicBlock lastNext = m_out.appendTo(smallEnoughCase, nonZeroCase);
+
+            m_out.branch(m_out.notZero32(size), usually(nonZeroCase), rarely(slowCase));
+
+            m_out.appendTo(nonZeroCase, slowCase);
+
+            LValue byteSize =
+                m_out.shl(m_out.zeroExtPtr(size), m_out.constInt32(logElementSize(type)));
+            if (elementSize(type) &lt; 8) {
+                byteSize = m_out.bitAnd(
+                    m_out.add(byteSize, m_out.constIntPtr(7)),
+                    m_out.constIntPtr(~static_cast&lt;intptr_t&gt;(7)));
+            }
+        
+            LValue storage = allocateBasicStorage(byteSize, slowCase);
+
+            LValue fastResultValue =
+                allocateObject&lt;JSArrayBufferView&gt;(structure, m_out.intPtrZero, slowCase);
+
+            m_out.storePtr(storage, fastResultValue, m_heaps.JSArrayBufferView_vector);
+            m_out.store32(size, fastResultValue, m_heaps.JSArrayBufferView_length);
+            m_out.store32(m_out.constInt32(FastTypedArray), fastResultValue, m_heaps.JSArrayBufferView_mode);
+
+            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+            m_out.jump(continuation);
+
+            m_out.appendTo(slowCase, continuation);
+
+            LValue slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationNewTypedArrayWithSizeForType(type), locations[0].directGPR(),
+                        CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR());
+                },
+                size);
+            ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
+            return;
+        }
+
+        case UntypedUse: {
+            LValue argument = lowJSValue(m_node-&gt;child1());
+
+            LValue result = vmCall(
+                m_out.intPtr, m_out.operation(operationNewTypedArrayWithOneArgumentForType(type)),
+                m_callFrame, weakPointer(globalObject-&gt;typedArrayStructure(type)), argument);
+
+            setJSValue(result);
+            return;
+        }
+
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            return;
+        }
+    }
+    
+    void compileAllocatePropertyStorage()
+    {
+        LValue object = lowCell(m_node-&gt;child1());
+        setStorage(allocatePropertyStorage(object, m_node-&gt;transition()-&gt;previous));
+    }
+
+    void compileReallocatePropertyStorage()
+    {
+        Transition* transition = m_node-&gt;transition();
+        LValue object = lowCell(m_node-&gt;child1());
+        LValue oldStorage = lowStorage(m_node-&gt;child2());
+        
+        setStorage(
+            reallocatePropertyStorage(
+                object, oldStorage, transition-&gt;previous, transition-&gt;next));
+    }
+    
+    void compileToStringOrCallStringConstructor()
+    {
+        switch (m_node-&gt;child1().useKind()) {
+        case StringObjectUse: {
+            LValue cell = lowCell(m_node-&gt;child1());
+            speculateStringObjectForCell(m_node-&gt;child1(), cell);
+            m_interpreter.filter(m_node-&gt;child1(), SpecStringObject);
+            
+            setJSValue(m_out.loadPtr(cell, m_heaps.JSWrapperObject_internalValue));
+            return;
+        }
+            
+        case StringOrStringObjectUse: {
+            LValue cell = lowCell(m_node-&gt;child1());
+            LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
+            
+            LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;ToString StringOrStringObject not string case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToString StringOrStringObject continuation&quot;));
+            
+            ValueFromBlock simpleResult = m_out.anchor(cell);
+            m_out.branch(
+                m_out.equal(structureID, m_out.constInt32(vm().stringStructure-&gt;id())),
+                unsure(continuation), unsure(notString));
+            
+            LBasicBlock lastNext = m_out.appendTo(notString, continuation);
+            speculateStringObjectForStructureID(m_node-&gt;child1(), structureID);
+            ValueFromBlock unboxedResult = m_out.anchor(
+                m_out.loadPtr(cell, m_heaps.JSWrapperObject_internalValue));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, simpleResult, unboxedResult));
+            
+            m_interpreter.filter(m_node-&gt;child1(), SpecString | SpecStringObject);
+            return;
+        }
+            
+        case CellUse:
+        case UntypedUse: {
+            LValue value;
+            if (m_node-&gt;child1().useKind() == CellUse)
+                value = lowCell(m_node-&gt;child1());
+            else
+                value = lowJSValue(m_node-&gt;child1());
+            
+            LBasicBlock isCell = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse is cell&quot;));
+            LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse not string&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse continuation&quot;));
+            
+            LValue isCellPredicate;
+            if (m_node-&gt;child1().useKind() == CellUse)
+                isCellPredicate = m_out.booleanTrue;
+            else
+                isCellPredicate = this-&gt;isCell(value, provenType(m_node-&gt;child1()));
+            m_out.branch(isCellPredicate, unsure(isCell), unsure(notString));
+            
+            LBasicBlock lastNext = m_out.appendTo(isCell, notString);
+            ValueFromBlock simpleResult = m_out.anchor(value);
+            LValue isStringPredicate;
+            if (m_node-&gt;child1()-&gt;prediction() &amp; SpecString)
+                isStringPredicate = isString(value, provenType(m_node-&gt;child1()));
+            else
+                isStringPredicate = m_out.booleanFalse;
+            m_out.branch(isStringPredicate, unsure(continuation), unsure(notString));
+            
+            m_out.appendTo(notString, continuation);
+            LValue operation;
+            if (m_node-&gt;child1().useKind() == CellUse)
+                operation = m_out.operation(m_node-&gt;op() == ToString ? operationToStringOnCell : operationCallStringConstructorOnCell);
+            else
+                operation = m_out.operation(m_node-&gt;op() == ToString ? operationToString : operationCallStringConstructor);
+            ValueFromBlock convertedResult = m_out.anchor(vmCall(m_out.int64, operation, m_callFrame, value));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setJSValue(m_out.phi(m_out.int64, simpleResult, convertedResult));
+            return;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            break;
+        }
+    }
+    
+    void compileToPrimitive()
+    {
+        LValue value = lowJSValue(m_node-&gt;child1());
+        
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive cell case&quot;));
+        LBasicBlock isObjectCase = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive object case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive continuation&quot;));
+        
+        Vector&lt;ValueFromBlock, 3&gt; results;
+        
+        results.append(m_out.anchor(value));
+        m_out.branch(
+            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, isObjectCase);
+        results.append(m_out.anchor(value));
+        m_out.branch(
+            isObject(value, provenType(m_node-&gt;child1())),
+            unsure(isObjectCase), unsure(continuation));
+        
+        m_out.appendTo(isObjectCase, continuation);
+        results.append(m_out.anchor(vmCall(
+            m_out.int64, m_out.operation(operationToPrimitive), m_callFrame, value)));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, results));
+    }
+    
+    void compileMakeRope()
+    {
+        LValue kids[3];
+        unsigned numKids;
+        kids[0] = lowCell(m_node-&gt;child1());
+        kids[1] = lowCell(m_node-&gt;child2());
+        if (m_node-&gt;child3()) {
+            kids[2] = lowCell(m_node-&gt;child3());
+            numKids = 3;
+        } else {
+            kids[2] = 0;
+            numKids = 2;
+        }
+        
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MakeRope slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MakeRope continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        MarkedAllocator&amp; allocator =
+            vm().heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+        
+        LValue result = allocateCell(
+            m_out.constIntPtr(&amp;allocator),
+            vm().stringStructure.get(),
+            slowPath);
+        
+        m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value);
+        for (unsigned i = 0; i &lt; numKids; ++i)
+            m_out.storePtr(kids[i], result, m_heaps.JSRopeString_fibers[i]);
+        for (unsigned i = numKids; i &lt; JSRopeString::s_maxInternalRopeLength; ++i)
+            m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSRopeString_fibers[i]);
+        LValue flags = m_out.load32(kids[0], m_heaps.JSString_flags);
+        LValue length = m_out.load32(kids[0], m_heaps.JSString_length);
+        for (unsigned i = 1; i &lt; numKids; ++i) {
+            flags = m_out.bitAnd(flags, m_out.load32(kids[i], m_heaps.JSString_flags));
+            CheckValue* lengthCheck = m_out.speculateAdd(
+                length, m_out.load32(kids[i], m_heaps.JSString_length));
+            blessSpeculation(lengthCheck, Uncountable, noValue(), nullptr, m_origin);
+            length = lengthCheck;
+        }
+        m_out.store32(
+            m_out.bitAnd(m_out.constInt32(JSString::Is8Bit), flags),
+            result, m_heaps.JSString_flags);
+        m_out.store32(length, result, m_heaps.JSString_length);
+        
+        ValueFromBlock fastResult = m_out.anchor(result);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        LValue slowResultValue;
+        switch (numKids) {
+        case 2:
+            slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationMakeRope2, locations[0].directGPR(), locations[1].directGPR(),
+                        locations[2].directGPR());
+                }, kids[0], kids[1]);
+            break;
+        case 3:
+            slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationMakeRope3, locations[0].directGPR(), locations[1].directGPR(),
+                        locations[2].directGPR(), locations[3].directGPR());
+                }, kids[0], kids[1], kids[2]);
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad number of children&quot;);
+            break;
+        }
+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+    }
+    
+    void compileStringCharAt()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue index = lowInt32(m_node-&gt;child2());
+        LValue storage = lowStorage(m_node-&gt;child3());
+            
+        LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String fast path&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String continuation&quot;));
+            
+        m_out.branch(
+            m_out.aboveOrEqual(
+                index, m_out.load32NonNegative(base, m_heaps.JSString_length)),
+            rarely(slowPath), usually(fastPath));
+            
+        LBasicBlock lastNext = m_out.appendTo(fastPath, slowPath);
+            
+        LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value);
+            
+        LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String 8-bit case&quot;));
+        LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String 16-bit case&quot;));
+        LBasicBlock bitsContinuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String bitness continuation&quot;));
+        LBasicBlock bigCharacter = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String big character&quot;));
+            
+        m_out.branch(
+            m_out.testIsZero32(
+                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIs8Bit())),
+            unsure(is16Bit), unsure(is8Bit));
+            
+        m_out.appendTo(is8Bit, is16Bit);
+            
+        ValueFromBlock char8Bit = m_out.anchor(
+            m_out.load8ZeroExt32(m_out.baseIndex(
+                m_heaps.characters8, storage, m_out.zeroExtPtr(index),
+                provenValue(m_node-&gt;child2()))));
+        m_out.jump(bitsContinuation);
+            
+        m_out.appendTo(is16Bit, bigCharacter);
+
+        LValue char16BitValue = m_out.load16ZeroExt32(
+            m_out.baseIndex(
+                m_heaps.characters16, storage, m_out.zeroExtPtr(index),
+                provenValue(m_node-&gt;child2())));
+        ValueFromBlock char16Bit = m_out.anchor(char16BitValue);
+        m_out.branch(
+            m_out.aboveOrEqual(char16BitValue, m_out.constInt32(0x100)),
+            rarely(bigCharacter), usually(bitsContinuation));
+            
+        m_out.appendTo(bigCharacter, bitsContinuation);
+            
+        Vector&lt;ValueFromBlock, 4&gt; results;
+        results.append(m_out.anchor(vmCall(
+            m_out.int64, m_out.operation(operationSingleCharacterString),
+            m_callFrame, char16BitValue)));
+        m_out.jump(continuation);
+            
+        m_out.appendTo(bitsContinuation, slowPath);
+            
+        LValue character = m_out.phi(m_out.int32, char8Bit, char16Bit);
+            
+        LValue smallStrings = m_out.constIntPtr(vm().smallStrings.singleCharacterStrings());
+            
+        results.append(m_out.anchor(m_out.loadPtr(m_out.baseIndex(
+            m_heaps.singleCharacterStrings, smallStrings, m_out.zeroExtPtr(character)))));
+        m_out.jump(continuation);
+            
+        m_out.appendTo(slowPath, continuation);
+            
+        if (m_node-&gt;arrayMode().isInBounds()) {
+            speculate(OutOfBounds, noValue(), 0, m_out.booleanTrue);
+            results.append(m_out.anchor(m_out.intPtrZero));
+        } else {
+            JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+                
+            if (globalObject-&gt;stringPrototypeChainIsSane()) {
+                // FIXME: This could be captured using a Speculation mode that means
+                // &quot;out-of-bounds loads return a trivial value&quot;, something like
+                // SaneChainOutOfBounds.
+                // https://bugs.webkit.org/show_bug.cgi?id=144668
+                
+                m_graph.watchpoints().addLazily(globalObject-&gt;stringPrototype()-&gt;structure()-&gt;transitionWatchpointSet());
+                m_graph.watchpoints().addLazily(globalObject-&gt;objectPrototype()-&gt;structure()-&gt;transitionWatchpointSet());
+                
+                LBasicBlock negativeIndex = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String negative index&quot;));
+                    
+                results.append(m_out.anchor(m_out.constInt64(JSValue::encode(jsUndefined()))));
+                m_out.branch(
+                    m_out.lessThan(index, m_out.int32Zero),
+                    rarely(negativeIndex), usually(continuation));
+                    
+                m_out.appendTo(negativeIndex, continuation);
+            }
+                
+            results.append(m_out.anchor(vmCall(
+                m_out.int64, m_out.operation(operationGetByValStringInt), m_callFrame, base, index)));
+        }
+            
+        m_out.jump(continuation);
+            
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, results));
+    }
+    
+    void compileStringCharCodeAt()
+    {
+        LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt 8-bit case&quot;));
+        LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt 16-bit case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt continuation&quot;));
+
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue index = lowInt32(m_node-&gt;child2());
+        LValue storage = lowStorage(m_node-&gt;child3());
+        
+        speculate(
+            Uncountable, noValue(), 0,
+            m_out.aboveOrEqual(
+                index, m_out.load32NonNegative(base, m_heaps.JSString_length)));
+        
+        LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value);
+        
+        m_out.branch(
+            m_out.testIsZero32(
+                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIs8Bit())),
+            unsure(is16Bit), unsure(is8Bit));
+            
+        LBasicBlock lastNext = m_out.appendTo(is8Bit, is16Bit);
+            
+        ValueFromBlock char8Bit = m_out.anchor(
+            m_out.load8ZeroExt32(m_out.baseIndex(
+                m_heaps.characters8, storage, m_out.zeroExtPtr(index),
+                provenValue(m_node-&gt;child2()))));
+        m_out.jump(continuation);
+            
+        m_out.appendTo(is16Bit, continuation);
+            
+        ValueFromBlock char16Bit = m_out.anchor(
+            m_out.load16ZeroExt32(m_out.baseIndex(
+                m_heaps.characters16, storage, m_out.zeroExtPtr(index),
+                provenValue(m_node-&gt;child2()))));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        
+        setInt32(m_out.phi(m_out.int32, char8Bit, char16Bit));
+    }
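The 8-bit/16-bit split in compileStringCharCodeAt above (test the `flagIs8Bit()` bit of `hashAndFlags`, then do a zero-extending byte or 16-bit load, merged by a phi) can be modeled in standalone C++. This is an illustrative sketch only: `TinyStringImpl` and its field names are hypothetical stand-ins for JSC's StringImpl, and the bounds `assert` stands in for the speculation/exit in the real lowering.

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: JSC strings hold either an 8-bit (Latin-1) or a 16-bit
// (UTF-16) character buffer, and StringCharCodeAt branches on a flag bit
// before choosing the load width. Field names here are hypothetical.
struct TinyStringImpl {
    bool is8Bit;            // stands in for the flagIs8Bit() bit of hashAndFlags
    const uint8_t* chars8;  // valid when is8Bit
    const uint16_t* chars16; // valid when !is8Bit
    uint32_t length;
};

// Mirrors the compiled fast path: bounds check, then a zero-extending load
// of either a byte (load8ZeroExt32) or a 16-bit unit (load16ZeroExt32),
// merged into one 32-bit result (the phi at the continuation block).
uint32_t charCodeAt(const TinyStringImpl& s, uint32_t index)
{
    assert(index < s.length); // the real code speculates and exits instead
    if (s.is8Bit)
        return s.chars8[index];
    return s.chars16[index];
}
```

The real lowering emits the two loads in separate basic blocks and joins them with `m_out.phi`; the if/else here collapses that control flow while keeping the same semantics.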
+
+    void compileStringFromCharCode()
+    {
+        Edge childEdge = m_node-&gt;child1();
+        
+        if (childEdge.useKind() == UntypedUse) {
+            LValue result = vmCall(
+                m_out.int64, m_out.operation(operationStringFromCharCodeUntyped), m_callFrame,
+                lowJSValue(childEdge));
+            setJSValue(result);
+            return;
+        }
+
+        DFG_ASSERT(m_graph, m_node, childEdge.useKind() == Int32Use);
+
+        LValue value = lowInt32(childEdge);
+        
+        LBasicBlock smallIntCase = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode small int case&quot;));
+        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode slow case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode continuation&quot;));
+
+        m_out.branch(
+            m_out.aboveOrEqual(value, m_out.constInt32(0xff)),
+            rarely(slowCase), usually(smallIntCase));
+
+        LBasicBlock lastNext = m_out.appendTo(smallIntCase, slowCase);
+
+        LValue smallStrings = m_out.constIntPtr(vm().smallStrings.singleCharacterStrings());
+        LValue fastResultValue = m_out.loadPtr(
+            m_out.baseIndex(m_heaps.singleCharacterStrings, smallStrings, m_out.zeroExtPtr(value)));
+        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+        m_out.jump(continuation);
+
+        m_out.appendTo(slowCase, continuation);
+
+        LValue slowResultValue = vmCall(
+            m_out.intPtr, m_out.operation(operationStringFromCharCode), m_callFrame, value);
+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+
+        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
+    }
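compileStringFromCharCode's fast path indexes a preallocated table of single-character strings (`vm().smallStrings.singleCharacterStrings()`), taking the slow call only when the code point fails the `0xff` guard. A minimal standalone model of that cache shape, with `std::string` standing in for JSString and a deliberately simplified slow path:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <string>

// Sketch only: codes below the guard hit a lazily filled table of
// single-character strings; anything else takes the "slow path"
// (the analogue of operationStringFromCharCode). The table bound mirrors
// the 0xff comparison in the compiled branch; this is a model, not JSC's
// actual SmallStrings implementation.
class SmallStringsModel {
public:
    std::string fromCharCode(uint16_t code)
    {
        if (code < 0xff) {
            std::string& cached = m_table[code]; // fast path: table lookup
            if (cached.empty())
                cached = std::string(1, static_cast<char>(code));
            return cached;
        }
        return makeSlow(code); // slow path for larger code points
    }

private:
    std::string makeSlow(uint16_t code)
    {
        // Placeholder: a real engine would allocate a 16-bit string here.
        return std::string{static_cast<char>(code >> 8),
                           static_cast<char>(code & 0xff)};
    }

    std::array<std::string, 0xff> m_table;
};
```

The payoff of the cached path is that repeated `String.fromCharCode(c)` calls for small `c` return the same preallocated string instead of allocating each time.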
+    
+    void compileGetByOffset()
+    {
+        StorageAccessData&amp; data = m_node-&gt;storageAccessData();
+        
+        setJSValue(loadProperty(
+            lowStorage(m_node-&gt;child1()), data.identifierNumber, data.offset));
+    }
+    
+    void compileGetGetter()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.GetterSetter_getter));
+    }
+    
+    void compileGetSetter()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.GetterSetter_setter));
+    }
+    
+    void compileMultiGetByOffset()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        
+        MultiGetByOffsetData&amp; data = m_node-&gt;multiGetByOffsetData();
+
+        if (data.cases.isEmpty()) {
+            // Protect against creating a Phi function with zero inputs. LLVM didn't like that.
+            // It's not clear if this is needed anymore.
+            // FIXME: https://bugs.webkit.org/show_bug.cgi?id=154382
+            terminate(BadCache);
+            return;
+        }
+        
+        Vector&lt;LBasicBlock, 2&gt; blocks(data.cases.size());
+        for (unsigned i = data.cases.size(); i--;)
+            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset case &quot;, i));
+        LBasicBlock exit = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset fail&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset continuation&quot;));
+        
+        Vector&lt;SwitchCase, 2&gt; cases;
+        StructureSet baseSet;
+        for (unsigned i = data.cases.size(); i--;) {
+            MultiGetByOffsetCase getCase = data.cases[i];
+            for (unsigned j = getCase.set().size(); j--;) {
+                Structure* structure = getCase.set()[j];
+                baseSet.add(structure);
+                cases.append(SwitchCase(weakStructureID(structure), blocks[i], Weight(1)));
+            }
+        }
+        m_out.switchInstruction(
+            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
+        
+        LBasicBlock lastNext = m_out.m_nextBlock;
+        
+        Vector&lt;ValueFromBlock, 2&gt; results;
+        for (unsigned i = data.cases.size(); i--;) {
+            MultiGetByOffsetCase getCase = data.cases[i];
+            GetByOffsetMethod method = getCase.method();
+            
+            m_out.appendTo(blocks[i], i + 1 &lt; data.cases.size() ? blocks[i + 1] : exit);
+            
+            LValue result;
+            
+            switch (method.kind()) {
+            case GetByOffsetMethod::Invalid:
+                RELEASE_ASSERT_NOT_REACHED();
+                break;
+                
+            case GetByOffsetMethod::Constant:
+                result = m_out.constInt64(JSValue::encode(method.constant()-&gt;value()));
+                break;
+                
+            case GetByOffsetMethod::Load:
+            case GetByOffsetMethod::LoadFromPrototype: {
+                LValue propertyBase;
+                if (method.kind() == GetByOffsetMethod::Load)
+                    propertyBase = base;
+                else
+                    propertyBase = weakPointer(method.prototype()-&gt;value().asCell());
+                if (!isInlineOffset(method.offset()))
+                    propertyBase = loadButterflyReadOnly(propertyBase);
+                result = loadProperty(
+                    propertyBase, data.identifierNumber, method.offset());
+                break;
+            } }
+            
+            results.append(m_out.anchor(result));
+            m_out.jump(continuation);
+        }
+        
+        m_out.appendTo(exit, continuation);
+        if (!m_interpreter.forNode(m_node-&gt;child1()).m_structure.isSubsetOf(baseSet))
+            speculate(BadCache, noValue(), nullptr, m_out.booleanTrue);
+        m_out.unreachable();
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, results));
+    }
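compileMultiGetByOffset above emits a switch over the base cell's structure ID, with one block per known case and an exit block that speculates BadCache for anything unmatched; each case either materializes a constant or loads at an offset. A hash map can stand in for the emitted switch in a standalone sketch (the `Handler` type and its fields are hypothetical, loosely mirroring GetByOffsetMethod's Constant vs. Load kinds):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Sketch only: models MultiGetByOffset's dispatch-on-structure-ID shape.
struct Handler {
    enum Kind { Constant, Load } kind;
    int64_t constant; // used when kind == Constant
    unsigned offset;  // used when kind == Load
};

// Returns true on a cache hit, writing the result; returning false models
// reaching the exit block (the BadCache speculation).
bool multiGetByOffset(const std::unordered_map<uint32_t, Handler>& cases,
                      uint32_t structureID, const int64_t* storage,
                      int64_t& result)
{
    auto it = cases.find(structureID); // stands in for the switch instruction
    if (it == cases.end())
        return false; // unhandled structure: exit / BadCache
    const Handler& h = it->second;
    result = (h.kind == Handler::Constant) ? h.constant : storage[h.offset];
    return true;
}
```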
+    
+    void compilePutByOffset()
+    {
+        StorageAccessData&amp; data = m_node-&gt;storageAccessData();
+        
+        storeProperty(
+            lowJSValue(m_node-&gt;child3()),
+            lowStorage(m_node-&gt;child1()), data.identifierNumber, data.offset);
+    }
+    
+    void compileMultiPutByOffset()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        LValue value = lowJSValue(m_node-&gt;child2());
+        
+        MultiPutByOffsetData&amp; data = m_node-&gt;multiPutByOffsetData();
+        
+        Vector&lt;LBasicBlock, 2&gt; blocks(data.variants.size());
+        for (unsigned i = data.variants.size(); i--;)
+            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset case &quot;, i));
+        LBasicBlock exit = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset fail&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset continuation&quot;));
+        
+        Vector&lt;SwitchCase, 2&gt; cases;
+        StructureSet baseSet;
+        for (unsigned i = data.variants.size(); i--;) {
+            PutByIdVariant variant = data.variants[i];
+            for (unsigned j = variant.oldStructure().size(); j--;) {
+                Structure* structure = variant.oldStructure()[j];
+                baseSet.add(structure);
+                cases.append(SwitchCase(weakStructureID(structure), blocks[i], Weight(1)));
+            }
+        }
+        m_out.switchInstruction(
+            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
+        
+        LBasicBlock lastNext = m_out.m_nextBlock;
+        
+        for (unsigned i = data.variants.size(); i--;) {
+            m_out.appendTo(blocks[i], i + 1 &lt; data.variants.size() ? blocks[i + 1] : exit);
+            
+            PutByIdVariant variant = data.variants[i];
+
+            checkInferredType(m_node-&gt;child2(), value, variant.requiredType());
+            
+            LValue storage;
+            if (variant.kind() == PutByIdVariant::Replace) {
+                if (isInlineOffset(variant.offset()))
+                    storage = base;
+                else
+                    storage = loadButterflyWithBarrier(base);
+            } else {
+                m_graph.m_plan.transitions.addLazily(
+                    codeBlock(), m_node-&gt;origin.semantic.codeOriginOwner(),
+                    variant.oldStructureForTransition(), variant.newStructure());
+                
+                storage = storageForTransition(
+                    base, variant.offset(),
+                    variant.oldStructureForTransition(), variant.newStructure());
+
+                ASSERT(variant.oldStructureForTransition()-&gt;indexingType() == variant.newStructure()-&gt;indexingType());
+                ASSERT(variant.oldStructureForTransition()-&gt;typeInfo().inlineTypeFlags() == variant.newStructure()-&gt;typeInfo().inlineTypeFlags());
+                ASSERT(variant.oldStructureForTransition()-&gt;typeInfo().type() == variant.newStructure()-&gt;typeInfo().type());
+                m_out.store32(
+                    weakStructureID(variant.newStructure()), base, m_heaps.JSCell_structureID);
+            }
+            
+            storeProperty(value, storage, data.identifierNumber, variant.offset());
+            m_out.jump(continuation);
+        }
+        
+        m_out.appendTo(exit, continuation);
+        if (!m_interpreter.forNode(m_node-&gt;child1()).m_structure.isSubsetOf(baseSet))
+            speculate(BadCache, noValue(), nullptr, m_out.booleanTrue);
+        m_out.unreachable();
+        
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void compileGetGlobalVariable()
+    {
+        setJSValue(m_out.load64(m_out.absolute(m_node-&gt;variablePointer())));
+    }
+    
+    void compilePutGlobalVariable()
+    {
+        m_out.store64(
+            lowJSValue(m_node-&gt;child2()), m_out.absolute(m_node-&gt;variablePointer()));
+    }
+    
+    void compileNotifyWrite()
+    {
+        WatchpointSet* set = m_node-&gt;watchpointSet();
+        
+        LBasicBlock isNotInvalidated = FTL_NEW_BLOCK(m_out, (&quot;NotifyWrite not invalidated case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NotifyWrite continuation&quot;));
+        
+        LValue state = m_out.load8ZeroExt32(m_out.absolute(set-&gt;addressOfState()));
+        m_out.branch(
+            m_out.equal(state, m_out.constInt32(IsInvalidated)),
+            usually(continuation), rarely(isNotInvalidated));
+        
+        LBasicBlock lastNext = m_out.appendTo(isNotInvalidated, continuation);
+
+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNotifyWrite, InvalidGPRReg, CCallHelpers::TrustedImmPtr(set));
+            });
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+    }
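compileNotifyWrite loads the watchpoint set's one-byte state and branches so that the common case (already invalidated) falls straight through to the continuation, with the lazy slow path taken only otherwise. The control flow can be sketched standalone; the enum values below are assumptions mirroring the shape, not the numeric encoding, of JSC's WatchpointState:

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: hypothetical stand-in for JSC's watchpoint state byte.
enum TinyWatchpointState : uint8_t { ClearWatchpoint, IsWatched, IsInvalidated };

struct TinyWatchpointSet {
    TinyWatchpointState state = IsWatched;
    unsigned fireCount = 0; // stands in for actually firing the watchpoints

    // Mirrors the compiled code: if the state byte already reads
    // IsInvalidated, take usually(continuation) and do nothing; otherwise
    // take rarely(isNotInvalidated) and call out (operationNotifyWrite's
    // analogue), which invalidates the set exactly once.
    void notifyWrite()
    {
        if (state == IsInvalidated)
            return;
        state = IsInvalidated;
        ++fireCount;
    }
};
```

This is why the branch weights are `usually(continuation)` / `rarely(isNotInvalidated)`: after the first write, every subsequent write takes the cheap fall-through.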
+    
+    void compileGetCallee()
+    {
+        setJSValue(m_out.loadPtr(addressFor(JSStack::Callee)));
+    }
+    
+    void compileGetArgumentCount()
+    {
+        setInt32(m_out.load32(payloadFor(JSStack::ArgumentCount)));
+    }
+    
+    void compileGetScope()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.JSFunction_scope));
+    }
+    
+    void compileSkipScope()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.JSScope_next));
+    }
+    
+    void compileGetClosureVar()
+    {
+        setJSValue(
+            m_out.load64(
+                lowCell(m_node-&gt;child1()),
+                m_heaps.JSEnvironmentRecord_variables[m_node-&gt;scopeOffset().offset()]));
+    }
+    
+    void compilePutClosureVar()
+    {
+        m_out.store64(
+            lowJSValue(m_node-&gt;child2()),
+            lowCell(m_node-&gt;child1()),
+            m_heaps.JSEnvironmentRecord_variables[m_node-&gt;scopeOffset().offset()]);
+    }
+    
+    void compileGetFromArguments()
+    {
+        setJSValue(
+            m_out.load64(
+                lowCell(m_node-&gt;child1()),
+                m_heaps.DirectArguments_storage[m_node-&gt;capturedArgumentsOffset().offset()]));
+    }
+    
+    void compilePutToArguments()
+    {
+        m_out.store64(
+            lowJSValue(m_node-&gt;child2()),
+            lowCell(m_node-&gt;child1()),
+            m_heaps.DirectArguments_storage[m_node-&gt;capturedArgumentsOffset().offset()]);
+    }
+    
+    void compileCompareEq()
+    {
+        if (m_node-&gt;isBinaryUseKind(Int32Use)
+            || m_node-&gt;isBinaryUseKind(Int52RepUse)
+            || m_node-&gt;isBinaryUseKind(DoubleRepUse)
+            || m_node-&gt;isBinaryUseKind(ObjectUse)
+            || m_node-&gt;isBinaryUseKind(BooleanUse)
+            || m_node-&gt;isBinaryUseKind(SymbolUse)
+            || m_node-&gt;isBinaryUseKind(StringIdentUse)
+            || m_node-&gt;isBinaryUseKind(StringUse)) {
+            compileCompareStrictEq();
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(ObjectUse, ObjectOrOtherUse)) {
+            compareEqObjectOrOtherToObject(m_node-&gt;child2(), m_node-&gt;child1());
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(ObjectOrOtherUse, ObjectUse)) {
+            compareEqObjectOrOtherToObject(m_node-&gt;child1(), m_node-&gt;child2());
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            nonSpeculativeCompare(
+                [&amp;] (LValue left, LValue right) {
+                    return m_out.equal(left, right);
+                },
+                operationCompareEq);
+            return;
+        }
+
+        if (m_node-&gt;child1().useKind() == OtherUse) {
+            ASSERT(!m_interpreter.needsTypeCheck(m_node-&gt;child1(), SpecOther));
+            setBoolean(equalNullOrUndefined(m_node-&gt;child2(), AllCellsAreFalse, EqualNullOrUndefined, ManualOperandSpeculation));
+            return;
+        }
+
+        if (m_node-&gt;child2().useKind() == OtherUse) {
+            ASSERT(!m_interpreter.needsTypeCheck(m_node-&gt;child2(), SpecOther));
+            setBoolean(equalNullOrUndefined(m_node-&gt;child1(), AllCellsAreFalse, EqualNullOrUndefined, ManualOperandSpeculation));
+            return;
+        }
+
+        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
+    }
+    
+    void compileCompareStrictEq()
+    {
+        if (m_node-&gt;isBinaryUseKind(Int32Use)) {
+            setBoolean(
+                m_out.equal(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(Int52RepUse)) {
+            Int52Kind kind;
+            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
+            LValue right = lowInt52(m_node-&gt;child2(), kind);
+            setBoolean(m_out.equal(left, right));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(DoubleRepUse)) {
+            setBoolean(
+                m_out.doubleEqual(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(StringIdentUse)) {
+            setBoolean(
+                m_out.equal(lowStringIdent(m_node-&gt;child1()), lowStringIdent(m_node-&gt;child2())));
+            return;
+        }
+
+        if (m_node-&gt;isBinaryUseKind(StringUse)) {
+            LValue left = lowCell(m_node-&gt;child1());
+            LValue right = lowCell(m_node-&gt;child2());
+
+            LBasicBlock notTriviallyEqualCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq/String not trivially equal case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq/String continuation&quot;));
+
+            speculateString(m_node-&gt;child1(), left);
+
+            ValueFromBlock fastResult = m_out.anchor(m_out.booleanTrue);
+            m_out.branch(
+                m_out.equal(left, right), unsure(continuation), unsure(notTriviallyEqualCase));
+
+            LBasicBlock lastNext = m_out.appendTo(notTriviallyEqualCase, continuation);
+
+            speculateString(m_node-&gt;child2(), right);
+            
+            ValueFromBlock slowResult = m_out.anchor(stringsEqual(left, right));
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            setBoolean(m_out.phi(m_out.boolean, fastResult, slowResult));
+            return;
+        }
+
+        if (m_node-&gt;isBinaryUseKind(ObjectUse, UntypedUse)) {
+            setBoolean(
+                m_out.equal(
+                    lowNonNullObject(m_node-&gt;child1()),
+                    lowJSValue(m_node-&gt;child2())));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(UntypedUse, ObjectUse)) {
+            setBoolean(
+                m_out.equal(
+                    lowNonNullObject(m_node-&gt;child2()),
+                    lowJSValue(m_node-&gt;child1())));
+            return;
+        }
+
+        if (m_node-&gt;isBinaryUseKind(ObjectUse)) {
+            setBoolean(
+                m_out.equal(
+                    lowNonNullObject(m_node-&gt;child1()),
+                    lowNonNullObject(m_node-&gt;child2())));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(BooleanUse)) {
+            setBoolean(
+                m_out.equal(lowBoolean(m_node-&gt;child1()), lowBoolean(m_node-&gt;child2())));
+            return;
+        }
+
+        if (m_node-&gt;isBinaryUseKind(SymbolUse)) {
+            LValue left = lowSymbol(m_node-&gt;child1());
+            LValue right = lowSymbol(m_node-&gt;child2());
+            LValue leftStringImpl = m_out.loadPtr(left, m_heaps.Symbol_privateName);
+            LValue rightStringImpl = m_out.loadPtr(right, m_heaps.Symbol_privateName);
+            setBoolean(m_out.equal(leftStringImpl, rightStringImpl));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(MiscUse, UntypedUse)
+            || m_node-&gt;isBinaryUseKind(UntypedUse, MiscUse)) {
+            speculate(m_node-&gt;child1());
+            speculate(m_node-&gt;child2());
+            LValue left = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
+            LValue right = lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation);
+            setBoolean(m_out.equal(left, right));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(StringIdentUse, NotStringVarUse)
+            || m_node-&gt;isBinaryUseKind(NotStringVarUse, StringIdentUse)) {
+            Edge leftEdge = m_node-&gt;childFor(StringIdentUse);
+            Edge rightEdge = m_node-&gt;childFor(NotStringVarUse);
+            
+            LValue left = lowStringIdent(leftEdge);
+            LValue rightValue = lowJSValue(rightEdge, ManualOperandSpeculation);
+            
+            LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar is cell case&quot;));
+            LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar is string case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar continuation&quot;));
+            
+            ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+            m_out.branch(
+                isCell(rightValue, provenType(rightEdge)),
+                unsure(isCellCase), unsure(continuation));
+            
+            LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
+            ValueFromBlock notStringResult = m_out.anchor(m_out.booleanFalse);
+            m_out.branch(
+                isString(rightValue, provenType(rightEdge)),
+                unsure(isStringCase), unsure(continuation));
+            
+            m_out.appendTo(isStringCase, continuation);
+            LValue right = m_out.loadPtr(rightValue, m_heaps.JSString_value);
+            speculateStringIdent(rightEdge, rightValue, right);
+            ValueFromBlock isStringResult = m_out.anchor(m_out.equal(left, right));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setBoolean(m_out.phi(m_out.boolean, notCellResult, notStringResult, isStringResult));
+            return;
+        }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
+    }
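The StringUse case of compileCompareStrictEq above first tests pointer identity (the "trivially equal" fast result) and only falls back to a content comparison (`stringsEqual`) when the two cells differ. The same two-tier shape in standalone C++, with `std::string` pointers standing in for JSString cells:

```cpp
#include <cassert>
#include <string>

// Sketch only: two distinct JSString cells can still be strictly equal,
// so identity is only a fast accept, never a fast reject.
bool stringStrictEq(const std::string* left, const std::string* right)
{
    if (left == right)
        return true;        // fastResult: same cell, trivially equal
    return *left == *right; // slowResult: content comparison (stringsEqual)
}
```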
+    
+    void compileCompareStrictEqConstant()
+    {
+        JSValue constant = m_node-&gt;child2()-&gt;asJSValue();
+
+        setBoolean(
+            m_out.equal(
+                lowJSValue(m_node-&gt;child1()),
+                m_out.constInt64(JSValue::encode(constant))));
+    }
+    
+    void compileCompareLess()
+    {
+        compare(
+            [&amp;] (LValue left, LValue right) {
+                return m_out.lessThan(left, right);
+            },
+            [&amp;] (LValue left, LValue right) {
+                return m_out.doubleLessThan(left, right);
+            },
+            operationCompareLess);
+    }
+    
+    void compileCompareLessEq()
+    {
+        compare(
+            [&amp;] (LValue left, LValue right) {
+                return m_out.lessThanOrEqual(left, right);
+            },
+            [&amp;] (LValue left, LValue right) {
+                return m_out.doubleLessThanOrEqual(left, right);
+            },
+            operationCompareLessEq);
+    }
+    
+    void compileCompareGreater()
+    {
+        compare(
+            [&amp;] (LValue left, LValue right) {
+                return m_out.greaterThan(left, right);
+            },
+            [&amp;] (LValue left, LValue right) {
+                return m_out.doubleGreaterThan(left, right);
+            },
+            operationCompareGreater);
+    }
+    
+    void compileCompareGreaterEq()
+    {
+        compare(
+            [&amp;] (LValue left, LValue right) {
+                return m_out.greaterThanOrEqual(left, right);
+            },
+            [&amp;] (LValue left, LValue right) {
+                return m_out.doubleGreaterThanOrEqual(left, right);
+            },
+            operationCompareGreaterEq);
+    }
+    
+    void compileLogicalNot()
+    {
+        setBoolean(m_out.logicalNot(boolify(m_node-&gt;child1())));
+    }
+
+    void compileCallOrConstruct()
+    {
+        Node* node = m_node;
+        unsigned numArgs = node-&gt;numChildren() - 1;
+
+        LValue jsCallee = lowJSValue(m_graph.varArgChild(node, 0));
+
+        unsigned frameSize = JSStack::CallFrameHeaderSize + numArgs;
+        unsigned alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
+
+        // JS-&gt;JS calling convention requires that the caller allows this much space on top of stack to
+        // get trashed by the callee, even if not all of that space is used to pass arguments. We tell
+        // B3 this explicitly for two reasons:
+        //
+        // - We will only pass frameSize worth of stuff.
+        // - The trashed stack guarantee is logically separate from the act of passing arguments, so we
+        //   shouldn't rely on Air to infer the trashed stack property based on the arguments it ends
+        //   up seeing.
+        m_proc.requestCallArgAreaSize(alignedFrameSize);
+
+        // Collect the arguments, since this can generate code and we want to generate it before we emit
+        // the call.
+        Vector&lt;ConstrainedValue&gt; arguments;
+
+        // Make sure that the callee goes into GPR0 because that's where the slow path thunks expect the
+        // callee to be.
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
+
+        auto addArgument = [&amp;] (LValue value, VirtualRegister reg, int offset) {
+            intptr_t offsetFromSP =
+                (reg.offset() - JSStack::CallerFrameAndPCSize) * sizeof(EncodedJSValue) + offset;
+            arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP)));
+        };
+
+        addArgument(jsCallee, VirtualRegister(JSStack::Callee), 0);
+        addArgument(m_out.constInt32(numArgs), VirtualRegister(JSStack::ArgumentCount), PayloadOffset);
+        for (unsigned i = 0; i &lt; numArgs; ++i)
+            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+        patchpoint-&gt;appendVector(arguments);
+
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+        
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+        patchpoint-&gt;clobberLate(RegisterSet::volatileRegistersForJSCall());
+        patchpoint-&gt;resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+
+        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                CallSiteIndex callSiteIndex = state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
+
+                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                jit.store32(
+                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
+                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
+
+                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
+
+                CCallHelpers::DataLabelPtr targetToCheck;
+                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
+                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::TrustedImmPtr(0));
+
+                CCallHelpers::Call fastCall = jit.nearCall();
+                CCallHelpers::Jump done = jit.jump();
+
+                slowPath.link(&amp;jit);
+
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                CCallHelpers::Call slowCall = jit.nearCall();
+                done.link(&amp;jit);
+
+                callLinkInfo-&gt;setUpCall(
+                    node-&gt;op() == Construct ? CallLinkInfo::Construct : CallLinkInfo::Call,
+                    node-&gt;origin.semantic, GPRInfo::regT0);
+
+                jit.addPtr(
+                    CCallHelpers::TrustedImm32(-params.proc().frameSize()),
+                    GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
+
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        MacroAssemblerCodePtr linkCall =
+                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
+
+                        callLinkInfo-&gt;setCallLocations(
+                            linkBuffer.locationOfNearCall(slowCall),
+                            linkBuffer.locationOf(targetToCheck),
+                            linkBuffer.locationOfNearCall(fastCall));
+                    });
+            });
+
+        setJSValue(patchpoint);
+    }
+
+    void compileTailCall()
+    {
+        Node* node = m_node;
+        unsigned numArgs = node-&gt;numChildren() - 1;
+
+        LValue jsCallee = lowJSValue(m_graph.varArgChild(node, 0));
+        
+        // We want B3 to give us all of the arguments using whatever mechanism it thinks is
+        // convenient. The generator then shuffles those arguments into our own call frame,
+        // destroying our frame in the process.
+
+        // Note that we don't have to do anything special for exceptions. A tail call is only a
+        // tail call if it is not inside a try block.
+
+        Vector&lt;ConstrainedValue&gt; arguments;
+
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
+
+        for (unsigned i = 0; i &lt; numArgs; ++i) {
+            // Note: we could let the shuffler do boxing for us, but it's not super clear that this
+            // would be better. Also, if we wanted to do that, then we'd have to teach the shuffler
+            // that 32-bit values could land at 4-byte alignment but not 8-byte alignment.
+            
+            ConstrainedValue constrainedValue(
+                lowJSValue(m_graph.varArgChild(node, 1 + i)),
+                ValueRep::WarmAny);
+            arguments.append(constrainedValue);
+        }
+
+        PatchpointValue* patchpoint = m_out.patchpoint(Void);
+        patchpoint-&gt;appendVector(arguments);
+
+        // Prevent any of the arguments from using the scratch register.
+        patchpoint-&gt;clobberEarly(RegisterSet::macroScratchRegisters());
+
+        // We don't have to tell the patchpoint that we will clobber registers, since we won't return
+        // anyway.
+
+        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                CallSiteIndex callSiteIndex = state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
+
+                CallFrameShuffleData shuffleData;
+                shuffleData.numLocals = state-&gt;jitCode-&gt;common.frameRegisterCount;
+                shuffleData.callee = ValueRecovery::inGPR(GPRInfo::regT0, DataFormatJS);
+
+                for (unsigned i = 0; i &lt; numArgs; ++i)
+                    shuffleData.args.append(params[1 + i].recoveryForJSValue());
+
+                shuffleData.setupCalleeSaveRegisters(jit.codeBlock());
+
+                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
+
+                CCallHelpers::DataLabelPtr targetToCheck;
+                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
+                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::TrustedImmPtr(0));
+
+                callLinkInfo-&gt;setFrameShuffleData(shuffleData);
+                CallFrameShuffler(jit, shuffleData).prepareForTailCall();
+
+                CCallHelpers::Call fastCall = jit.nearTailCall();
+
+                slowPath.link(&amp;jit);
+
+                // Yes, this is really necessary. You could throw an exception in a host call on the
+                // slow path. That'll route us to lookupExceptionHandler(), which unwinds starting
+                // with the call site index of our frame. Bad things happen if it's not set.
+                jit.store32(
+                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
+                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
+
+                CallFrameShuffler slowPathShuffler(jit, shuffleData);
+                slowPathShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
+                slowPathShuffler.prepareForSlowPath();
+
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                CCallHelpers::Call slowCall = jit.nearCall();
+
+                jit.abortWithReason(JITDidReturnFromTailCall);
+
+                callLinkInfo-&gt;setUpCall(CallLinkInfo::TailCall, codeOrigin, GPRInfo::regT0);
+
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        MacroAssemblerCodePtr linkCall =
+                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
+
+                        callLinkInfo-&gt;setCallLocations(
+                            linkBuffer.locationOfNearCall(slowCall),
+                            linkBuffer.locationOf(targetToCheck),
+                            linkBuffer.locationOfNearCall(fastCall));
+                    });
+            });
+        
+        m_out.unreachable();
+    }
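The fast/slow call pattern that `compileTailCall` and `compileCallOrConstruct` emit can be sketched in ordinary C++. This is a conceptual illustration, not JSC code: `CallSite`, `dispatch`, and the counters are invented names standing in for the patchable `targetToCheck` comparison, the near call on the fast path, and the re-linking slow path.

```cpp
// Sketch (not JSC code) of a patchable call site: compare the callee
// against a patchable expected value; on a match take the fast call,
// otherwise fall through to a slow path that links the site.
struct CallSite {
    const void* expectedCallee = nullptr; // patched by the "link task"
    int fastCalls = 0;
    int slowCalls = 0;

    int dispatch(const void* callee, int (*target)())
    {
        if (callee == expectedCallee) {
            ++fastCalls;              // fast path: direct near call
            return target();
        }
        ++slowCalls;                  // slow path: link, then call
        expectedCallee = callee;      // "patch" the site for next time
        return target();
    }
};

int callee42() { return 42; }

int demo()
{
    CallSite site;
    site.dispatch(reinterpret_cast<const void*>(&callee42), callee42); // slow: unlinked
    site.dispatch(reinterpret_cast<const void*>(&callee42), callee42); // fast: now linked
    return site.fastCalls * 10 + site.slowCalls;
}
```

After the first (slow) dispatch links the site, subsequent calls to the same callee take the fast path without any lookup.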
+    
+    void compileCallOrConstructVarargs()
+    {
+        Node* node = m_node;
+        LValue jsCallee = lowJSValue(m_node-&gt;child1());
+        LValue thisArg = lowJSValue(m_node-&gt;child3());
+        
+        LValue jsArguments = nullptr;
+        bool forwarding = false;
+        
+        switch (node-&gt;op()) {
+        case CallVarargs:
+        case TailCallVarargs:
+        case TailCallVarargsInlinedCaller:
+        case ConstructVarargs:
+            jsArguments = lowJSValue(node-&gt;child2());
+            break;
+        case CallForwardVarargs:
+        case TailCallForwardVarargs:
+        case TailCallForwardVarargsInlinedCaller:
+        case ConstructForwardVarargs:
+            forwarding = true;
+            break;
+        default:
+            DFG_CRASH(m_graph, node, &quot;bad node type&quot;);
+            break;
+        }
+        
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+
+        // Append the forms of the arguments that we will use before any clobbering happens.
+        patchpoint-&gt;append(jsCallee, ValueRep::reg(GPRInfo::regT0));
+        if (jsArguments)
+            patchpoint-&gt;appendSomeRegister(jsArguments);
+        patchpoint-&gt;appendSomeRegister(thisArg);
+
+        if (!forwarding) {
+            // Now append them again for after clobbering. Note that the compiler may ask us to use a
+        // different register for the late use (the post-clobbering version of the value). This gives
+            // the compiler a chance to spill these values without having to burn any callee-saves.
+            patchpoint-&gt;append(jsCallee, ValueRep::LateColdAny);
+            patchpoint-&gt;append(jsArguments, ValueRep::LateColdAny);
+            patchpoint-&gt;append(thisArg, ValueRep::LateColdAny);
+        }
+
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+        
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+        patchpoint-&gt;clobberLate(RegisterSet::volatileRegistersForJSCall());
+        patchpoint-&gt;resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+
+        // This is the minimum amount of call arg area stack space that all JS-&gt;JS calls always have.
+        unsigned minimumJSCallAreaSize =
+            sizeof(CallerFrameAndPC) +
+            WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(EncodedJSValue));
+
+        m_proc.requestCallArgAreaSize(minimumJSCallAreaSize);
+        
+        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                CallSiteIndex callSiteIndex =
+                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
+
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+
+                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                jit.store32(
+                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
+                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
+
+                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
+                CallVarargsData* data = node-&gt;callVarargsData();
+
+                unsigned argIndex = 1;
+                GPRReg calleeGPR = params[argIndex++].gpr();
+                ASSERT(calleeGPR == GPRInfo::regT0);
+                GPRReg argumentsGPR = jsArguments ? params[argIndex++].gpr() : InvalidGPRReg;
+                GPRReg thisGPR = params[argIndex++].gpr();
+
+                B3::ValueRep calleeLateRep;
+                B3::ValueRep argumentsLateRep;
+                B3::ValueRep thisLateRep;
+                if (!forwarding) {
+                    // If we're not forwarding then we'll need callee, arguments, and this after we
+                    // have potentially clobbered calleeGPR, argumentsGPR, and thisGPR. Our technique
+                    // for this is to supply all of those operands as late uses in addition to
+                    // specifying them as early uses. It's possible that the late use uses a spill
+                    // while the early use uses a register, and it's possible for the late and early
+                    // uses to use different registers. We do know that the late uses interfere with
+                    // all volatile registers and so won't use those, but the early uses may use
+                    // volatile registers and in the case of calleeGPR, it's pinned to regT0 so it
+                    // definitely will.
+                    //
+                    // Note that we have to be super careful with these. It's possible that these
+                    // use a shuffling of the registers used for calleeGPR, argumentsGPR, and
+                    // thisGPR. If that happens and we do for example:
+                    //
+                    //     calleeLateRep.emitRestore(jit, calleeGPR);
+                    //     argumentsLateRep.emitRestore(jit, calleeGPR);
+                    //
+                    // Then we might end up with garbage if calleeLateRep.gpr() == argumentsGPR and
+                    // argumentsLateRep.gpr() == calleeGPR.
+                    //
+                    // We do a variety of things to prevent this from happening. For example, we use
+                    // argumentsLateRep before needing the other two and after we've already stopped
+                    // using the *GPRs. Also, we pin calleeGPR to regT0, and rely on the fact that
+                    // the *LateReps cannot use volatile registers (so they cannot be regT0, so
+                    // calleeGPR != argumentsLateRep.gpr() and calleeGPR != thisLateRep.gpr()).
+                    //
+                    // An alternative would have been to just use early uses and early-clobber all
+                    // volatile registers. But that would force callee, arguments, and this into
+                    // callee-save registers even if we have to spill them. We don't want spilling to
+                    // use up three callee-saves.
+                    //
+                    // TL;DR: The way we use LateReps here is dangerous and barely works but achieves
+                    // some desirable performance properties, so don't mistake the cleverness for
+                    // elegance.
+                    calleeLateRep = params[argIndex++];
+                    argumentsLateRep = params[argIndex++];
+                    thisLateRep = params[argIndex++];
+                }
+
+                // Get some scratch registers.
+                RegisterSet usedRegisters;
+                usedRegisters.merge(RegisterSet::stackRegisters());
+                usedRegisters.merge(RegisterSet::reservedHardwareRegisters());
+                usedRegisters.merge(RegisterSet::calleeSaveRegisters());
+                usedRegisters.set(calleeGPR);
+                if (argumentsGPR != InvalidGPRReg)
+                    usedRegisters.set(argumentsGPR);
+                usedRegisters.set(thisGPR);
+                if (calleeLateRep.isReg())
+                    usedRegisters.set(calleeLateRep.reg());
+                if (argumentsLateRep.isReg())
+                    usedRegisters.set(argumentsLateRep.reg());
+                if (thisLateRep.isReg())
+                    usedRegisters.set(thisLateRep.reg());
+                ScratchRegisterAllocator allocator(usedRegisters);
+                GPRReg scratchGPR1 = allocator.allocateScratchGPR();
+                GPRReg scratchGPR2 = allocator.allocateScratchGPR();
+                GPRReg scratchGPR3 = forwarding ? allocator.allocateScratchGPR() : InvalidGPRReg;
+                RELEASE_ASSERT(!allocator.numberOfReusedRegisters());
+
+                auto callWithExceptionCheck = [&amp;] (void* callee) {
+                    jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR);
+                    jit.call(GPRInfo::nonPreservedNonArgumentGPR);
+                    exceptions-&gt;append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
+                };
+
+                auto adjustStack = [&amp;] (GPRReg amount) {
+                    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), amount, CCallHelpers::stackPointerRegister);
+                };
+
+                unsigned originalStackHeight = params.proc().frameSize();
+
+                if (forwarding) {
+                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR2);
+                    
+                    CCallHelpers::JumpList slowCase;
+                    emitSetupVarargsFrameFastCase(jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node-&gt;child2()-&gt;origin.semantic.inlineCallFrame, data-&gt;firstVarArgOffset, slowCase);
+
+                    CCallHelpers::Jump done = jit.jump();
+                    slowCase.link(&amp;jit);
+                    jit.setupArgumentsExecState();
+                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationThrowStackOverflowForVarargs));
+                    jit.abortWithReason(DFGVarargsThrowingPathDidNotThrow);
+                    
+                    done.link(&amp;jit);
+
+                    adjustStack(scratchGPR2);
+                } else {
+                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR1);
+                    jit.setupArgumentsWithExecState(argumentsGPR, scratchGPR1, CCallHelpers::TrustedImm32(data-&gt;firstVarArgOffset));
+                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationSizeFrameForVarargs));
+
+                    jit.move(GPRInfo::returnValueGPR, scratchGPR1);
+                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR2);
+                    argumentsLateRep.emitRestore(jit, argumentsGPR);
+                    emitSetVarargsFrame(jit, scratchGPR1, false, scratchGPR2, scratchGPR2);
+                    jit.addPtr(CCallHelpers::TrustedImm32(-minimumJSCallAreaSize), scratchGPR2, CCallHelpers::stackPointerRegister);
+                    jit.setupArgumentsWithExecState(scratchGPR2, argumentsGPR, CCallHelpers::TrustedImm32(data-&gt;firstVarArgOffset), scratchGPR1);
+                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationSetupVarargsFrame));
+                    
+                    adjustStack(GPRInfo::returnValueGPR);
+
+                    calleeLateRep.emitRestore(jit, GPRInfo::regT0);
+
+                    // This may not emit code if thisGPR was assigned a callee-save register. Also, we're guaranteed
+                    // that thisGPR != GPRInfo::regT0 because regT0 interferes with it.
+                    thisLateRep.emitRestore(jit, thisGPR);
+                }
+                
+                jit.store64(GPRInfo::regT0, CCallHelpers::calleeFrameSlot(JSStack::Callee));
+                jit.store64(thisGPR, CCallHelpers::calleeArgumentSlot(0));
+                
+                CallLinkInfo::CallType callType;
+                if (node-&gt;op() == ConstructVarargs || node-&gt;op() == ConstructForwardVarargs)
+                    callType = CallLinkInfo::ConstructVarargs;
+                else if (node-&gt;op() == TailCallVarargs || node-&gt;op() == TailCallForwardVarargs)
+                    callType = CallLinkInfo::TailCallVarargs;
+                else
+                    callType = CallLinkInfo::CallVarargs;
+                
+                bool isTailCall = CallLinkInfo::callModeFor(callType) == CallMode::Tail;
+                
+                CCallHelpers::DataLabelPtr targetToCheck;
+                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
+                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::TrustedImmPtr(0));
+                
+                CCallHelpers::Call fastCall;
+                CCallHelpers::Jump done;
+                
+                if (isTailCall) {
+                    jit.emitRestoreCalleeSaves();
+                    jit.prepareForTailCallSlow();
+                    fastCall = jit.nearTailCall();
+                } else {
+                    fastCall = jit.nearCall();
+                    done = jit.jump();
+                }
+                
+                slowPath.link(&amp;jit);
+
+                if (isTailCall)
+                    jit.emitRestoreCalleeSaves();
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                CCallHelpers::Call slowCall = jit.nearCall();
+                
+                if (isTailCall)
+                    jit.abortWithReason(JITDidReturnFromTailCall);
+                else
+                    done.link(&amp;jit);
+                
+                callLinkInfo-&gt;setUpCall(callType, node-&gt;origin.semantic, GPRInfo::regT0);
+                
+                jit.addPtr(
+                    CCallHelpers::TrustedImm32(-originalStackHeight),
+                    GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
+                
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        MacroAssemblerCodePtr linkCall =
+                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
+                        
+                        callLinkInfo-&gt;setCallLocations(
+                            linkBuffer.locationOfNearCall(slowCall),
+                            linkBuffer.locationOf(targetToCheck),
+                            linkBuffer.locationOfNearCall(fastCall));
+                    });
+            });
+
+        switch (node-&gt;op()) {
+        case TailCallVarargs:
+        case TailCallForwardVarargs:
+            m_out.unreachable();
+            break;
+
+        default:
+            setJSValue(patchpoint);
+            break;
+        }
+    }
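The long comment in `compileCallOrConstructVarargs` warns about restore ordering when the late reps form a shuffle of the early-use registers. The hazard can be demonstrated with a toy model; this is not JSC code, and "registers" here are just array slots, with `emitRestore` standing in for `ValueRep::emitRestore`.

```cpp
#include <array>

// Toy model (not JSC code) of the restore-ordering hazard: a LateRep
// names the slot a value lives in after clobbering, and emitRestore
// copies it into the slot the generator wants it in.
using Regs = std::array<int, 4>;

void emitRestore(Regs& regs, int fromReg, int toReg) { regs[toReg] = regs[fromReg]; }

bool demo()
{
    // Callee lives in reg 1, arguments in reg 0 -- exactly swapped from
    // where the generator wants them (callee -> reg 0, arguments -> reg 1).
    Regs bad = { 20 /* arguments */, 10 /* callee */, 0, 0 };
    emitRestore(bad, 1, 0); // callee into reg 0 clobbers arguments
    emitRestore(bad, 0, 1); // the "arguments" restore now copies the callee
    bool garbled = (bad[1] != 20);

    // The generator above sidesteps this by consuming argumentsLateRep
    // first, while it is still intact, before restoring the others.
    Regs good = { 20, 10, 0, 0 };
    int argumentsValue = good[0]; // use arguments before any restore
    emitRestore(good, 1, 0);      // now safe to move the callee
    bool ok = (argumentsValue == 20 && good[0] == 10);

    return garbled && ok;
}
```

The naive ordering garbles one value, while consuming the shuffled operand first preserves both, which is the discipline the comment describes.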
+    
+    void compileLoadVarargs()
+    {
+        LoadVarargsData* data = m_node-&gt;loadVarargsData();
+        LValue jsArguments = lowJSValue(m_node-&gt;child1());
+        
+        LValue length = vmCall(
+            m_out.int32, m_out.operation(operationSizeOfVarargs), m_callFrame, jsArguments,
+            m_out.constInt32(data-&gt;offset));
+        
+        // FIXME: There is a chance that we will call an effectful length getter twice. This is safe
+        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+        // past the sizing.
+        // https://bugs.webkit.org/show_bug.cgi?id=141448
+        
+        LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
+        speculate(
+            VarargsOverflow, noValue(), nullptr,
+            m_out.above(lengthIncludingThis, m_out.constInt32(data-&gt;limit)));
+        
+        m_out.store32(lengthIncludingThis, payloadFor(data-&gt;machineCount));
+        
+        // FIXME: This computation is rather silly. If operationLoadVarargs just took a pointer instead
+        // of a VirtualRegister, we wouldn't have to do this.
+        // https://bugs.webkit.org/show_bug.cgi?id=141660
+        LValue machineStart = m_out.lShr(
+            m_out.sub(addressFor(data-&gt;machineStart.offset()).value(), m_callFrame),
+            m_out.constIntPtr(3));
+        
+        vmCall(
+            m_out.voidType, m_out.operation(operationLoadVarargs), m_callFrame,
+            m_out.castToInt32(machineStart), jsArguments, m_out.constInt32(data-&gt;offset),
+            length, m_out.constInt32(data-&gt;mandatoryMinimum));
+    }
+    
+    void compileForwardVarargs()
+    {
+        LoadVarargsData* data = m_node-&gt;loadVarargsData();
+        InlineCallFrame* inlineCallFrame = m_node-&gt;child1()-&gt;origin.semantic.inlineCallFrame;
+        
+        LValue length = getArgumentsLength(inlineCallFrame).value;
+        LValue lengthIncludingThis = m_out.add(length, m_out.constInt32(1 - data-&gt;offset));
+        
+        speculate(
+            VarargsOverflow, noValue(), nullptr,
+            m_out.above(lengthIncludingThis, m_out.constInt32(data-&gt;limit)));
+        
+        m_out.store32(lengthIncludingThis, payloadFor(data-&gt;machineCount));
+        
+        LValue sourceStart = getArgumentsStart(inlineCallFrame);
+        LValue targetStart = addressFor(data-&gt;machineStart).value();
+
+        LBasicBlock undefinedLoop = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs undefined loop body&quot;));
+        LBasicBlock mainLoopEntry = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs main loop entry&quot;));
+        LBasicBlock mainLoop = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs main loop body&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs continuation&quot;));
+        
+        LValue lengthAsPtr = m_out.zeroExtPtr(length);
+        LValue loopBoundValue = m_out.constIntPtr(data-&gt;mandatoryMinimum);
+        ValueFromBlock loopBound = m_out.anchor(loopBoundValue);
+        m_out.branch(
+            m_out.above(loopBoundValue, lengthAsPtr), unsure(undefinedLoop), unsure(mainLoopEntry));
+        
+        LBasicBlock lastNext = m_out.appendTo(undefinedLoop, mainLoopEntry);
+        LValue previousIndex = m_out.phi(m_out.intPtr, loopBound);
+        LValue currentIndex = m_out.sub(previousIndex, m_out.intPtrOne);
+        m_out.store64(
+            m_out.constInt64(JSValue::encode(jsUndefined())),
+            m_out.baseIndex(m_heaps.variables, targetStart, currentIndex));
+        ValueFromBlock nextIndex = m_out.anchor(currentIndex);
+        m_out.addIncomingToPhi(previousIndex, nextIndex);
+        m_out.branch(
+            m_out.above(currentIndex, lengthAsPtr), unsure(undefinedLoop), unsure(mainLoopEntry));
+        
+        m_out.appendTo(mainLoopEntry, mainLoop);
+        loopBound = m_out.anchor(lengthAsPtr);
+        m_out.branch(m_out.notNull(lengthAsPtr), unsure(mainLoop), unsure(continuation));
+        
+        m_out.appendTo(mainLoop, continuation);
+        previousIndex = m_out.phi(m_out.intPtr, loopBound);
+        currentIndex = m_out.sub(previousIndex, m_out.intPtrOne);
+        LValue value = m_out.load64(
+            m_out.baseIndex(
+                m_heaps.variables, sourceStart,
+                m_out.add(currentIndex, m_out.constIntPtr(data-&gt;offset))));
+        m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, currentIndex));
+        nextIndex = m_out.anchor(currentIndex);
+        m_out.addIncomingToPhi(previousIndex, nextIndex);
+        m_out.branch(m_out.isNull(currentIndex), unsure(continuation), unsure(mainLoop));
+        
+        m_out.appendTo(continuation, lastNext);
+    }
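The lowering in `compileForwardVarargs` builds two countdown loops: an "undefined loop" that pads slots from the mandatory minimum down to the actual argument count, then a main loop that copies the real arguments. A plain C++ sketch of that shape (not JSC code; `kUndefined` stands in for `JSValue::encode(jsUndefined())`, and the function name is illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch (not JSC code) of the two countdown loops: pad with undefined
// above the real length, then copy arguments, skipping `offset` entries.
constexpr int64_t kUndefined = -1;

std::vector<int64_t> forwardVarargs(
    const std::vector<int64_t>& source, size_t mandatoryMinimum, size_t offset)
{
    size_t length = source.size() > offset ? source.size() - offset : 0;
    std::vector<int64_t> target(std::max(length, mandatoryMinimum));

    // Undefined loop: runs only while the index is above the real length.
    for (size_t i = target.size(); i > length; --i)
        target[i - 1] = kUndefined;

    // Main loop: copy the forwarded arguments, counting down to zero.
    for (size_t i = length; i > 0; --i)
        target[i - 1] = source[i - 1 + offset];

    return target;
}
```

For example, forwarding three arguments with a mandatory minimum of five fills the top two slots with undefined first, then copies the three arguments, mirroring the branch structure of the lowered loops.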
+
+    void compileJump()
+    {
+        m_out.jump(lowBlock(m_node-&gt;targetBlock()));
+    }
+    
+    void compileBranch()
+    {
+        m_out.branch(
+            boolify(m_node-&gt;child1()),
+            WeightedTarget(
+                lowBlock(m_node-&gt;branchData()-&gt;taken.block),
+                m_node-&gt;branchData()-&gt;taken.count),
+            WeightedTarget(
+                lowBlock(m_node-&gt;branchData()-&gt;notTaken.block),
+                m_node-&gt;branchData()-&gt;notTaken.count));
+    }
+    
+    void compileSwitch()
+    {
+        SwitchData* data = m_node-&gt;switchData();
+        switch (data-&gt;kind) {
+        case SwitchImm: {
+            Vector&lt;ValueFromBlock, 2&gt; intValues;
+            LBasicBlock switchOnInts = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm int case&quot;));
+            
+            LBasicBlock lastNext = m_out.appendTo(m_out.m_block, switchOnInts);
+            
+            switch (m_node-&gt;child1().useKind()) {
+            case Int32Use: {
+                intValues.append(m_out.anchor(lowInt32(m_node-&gt;child1())));
+                m_out.jump(switchOnInts);
+                break;
+            }
+                
+            case UntypedUse: {
+                LBasicBlock isInt = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is int&quot;));
+                LBasicBlock isNotInt = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is not int&quot;));
+                LBasicBlock isDouble = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is double&quot;));
+                
+                LValue boxedValue = lowJSValue(m_node-&gt;child1());
+                m_out.branch(isNotInt32(boxedValue), unsure(isNotInt), unsure(isInt));
+                
+                LBasicBlock innerLastNext = m_out.appendTo(isInt, isNotInt);
+                
+                intValues.append(m_out.anchor(unboxInt32(boxedValue)));
+                m_out.jump(switchOnInts);
+                
+                m_out.appendTo(isNotInt, isDouble);
+                m_out.branch(
+                    isCellOrMisc(boxedValue, provenType(m_node-&gt;child1())),
+                    usually(lowBlock(data-&gt;fallThrough.block)), rarely(isDouble));
+                
+                m_out.appendTo(isDouble, innerLastNext);
+                LValue doubleValue = unboxDouble(boxedValue);
+                LValue intInDouble = m_out.doubleToInt(doubleValue);
+                intValues.append(m_out.anchor(intInDouble));
+                m_out.branch(
+                    m_out.doubleEqual(m_out.intToDouble(intInDouble), doubleValue),
+                    unsure(switchOnInts), unsure(lowBlock(data-&gt;fallThrough.block)));
+                break;
+            }
+                
+            default:
+                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                break;
+            }
+            
+            m_out.appendTo(switchOnInts, lastNext);
+            buildSwitch(data, m_out.int32, m_out.phi(m_out.int32, intValues));
+            return;
+        }
+        
+        case SwitchChar: {
+            LValue stringValue;
+            
+            // FIXME: We should use something other than unsure() for the branch weight
+            // of the fallThrough block. The main challenge is just that we have multiple
+            // branches to fallThrough but a single count, so we would need to divvy it up
+            // among the different lowered branches.
+            // https://bugs.webkit.org/show_bug.cgi?id=129082
+            
+            switch (m_node-&gt;child1().useKind()) {
+            case StringUse: {
+                stringValue = lowString(m_node-&gt;child1());
+                break;
+            }
+                
+            case UntypedUse: {
+                LValue unboxedValue = lowJSValue(m_node-&gt;child1());
+                
+                LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar is cell&quot;));
+                LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar is string&quot;));
+                
+                m_out.branch(
+                    isNotCell(unboxedValue, provenType(m_node-&gt;child1())),
+                    unsure(lowBlock(data-&gt;fallThrough.block)), unsure(isCellCase));
+                
+                LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
+                LValue cellValue = unboxedValue;
+                m_out.branch(
+                    isNotString(cellValue, provenType(m_node-&gt;child1())),
+                    unsure(lowBlock(data-&gt;fallThrough.block)), unsure(isStringCase));
+                
+                m_out.appendTo(isStringCase, lastNext);
+                stringValue = cellValue;
+                break;
+            }
+                
+            default:
+                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                break;
+            }
+            
+            LBasicBlock lengthIs1 = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar length is 1&quot;));
+            LBasicBlock needResolution = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar resolution&quot;));
+            LBasicBlock resolved = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar resolved&quot;));
+            LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar 8bit&quot;));
+            LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar 16bit&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar continuation&quot;));
+            
+            m_out.branch(
+                m_out.notEqual(
+                    m_out.load32NonNegative(stringValue, m_heaps.JSString_length),
+                    m_out.int32One),
+                unsure(lowBlock(data-&gt;fallThrough.block)), unsure(lengthIs1));
+            
+            LBasicBlock lastNext = m_out.appendTo(lengthIs1, needResolution);
+            Vector&lt;ValueFromBlock, 2&gt; values;
+            LValue fastValue = m_out.loadPtr(stringValue, m_heaps.JSString_value);
+            values.append(m_out.anchor(fastValue));
+            m_out.branch(m_out.isNull(fastValue), rarely(needResolution), usually(resolved));
+            
+            m_out.appendTo(needResolution, resolved);
+            values.append(m_out.anchor(
+                vmCall(m_out.intPtr, m_out.operation(operationResolveRope), m_callFrame, stringValue)));
+            m_out.jump(resolved);
+            
+            m_out.appendTo(resolved, is8Bit);
+            LValue value = m_out.phi(m_out.intPtr, values);
+            LValue characterData = m_out.loadPtr(value, m_heaps.StringImpl_data);
+            m_out.branch(
+                m_out.testNonZero32(
+                    m_out.load32(value, m_heaps.StringImpl_hashAndFlags),
+                    m_out.constInt32(StringImpl::flagIs8Bit())),
+                unsure(is8Bit), unsure(is16Bit));
+            
+            Vector&lt;ValueFromBlock, 2&gt; characters;
+            m_out.appendTo(is8Bit, is16Bit);
+            characters.append(m_out.anchor(m_out.load8ZeroExt32(characterData, m_heaps.characters8[0])));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(is16Bit, continuation);
+            characters.append(m_out.anchor(m_out.load16ZeroExt32(characterData, m_heaps.characters16[0])));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            buildSwitch(data, m_out.int32, m_out.phi(m_out.int32, characters));
+            return;
+        }
+        
+        case SwitchString: {
+            switch (m_node-&gt;child1().useKind()) {
+            case StringIdentUse: {
+                LValue stringImpl = lowStringIdent(m_node-&gt;child1());
+                
+                Vector&lt;SwitchCase&gt; cases;
+                for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
+                    LValue value = m_out.constIntPtr(data-&gt;cases[i].value.stringImpl());
+                    LBasicBlock block = lowBlock(data-&gt;cases[i].target.block);
+                    Weight weight = Weight(data-&gt;cases[i].target.count);
+                    cases.append(SwitchCase(value, block, weight));
+                }
+                
+                m_out.switchInstruction(
+                    stringImpl, cases, lowBlock(data-&gt;fallThrough.block),
+                    Weight(data-&gt;fallThrough.count));
+                return;
+            }
+                
+            case StringUse: {
+                switchString(data, lowString(m_node-&gt;child1()));
+                return;
+            }
+                
+            case UntypedUse: {
+                LValue value = lowJSValue(m_node-&gt;child1());
+                
+                LBasicBlock isCellBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString Untyped cell case&quot;));
+                LBasicBlock isStringBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString Untyped string case&quot;));
+                
+                m_out.branch(
+                    isCell(value, provenType(m_node-&gt;child1())),
+                    unsure(isCellBlock), unsure(lowBlock(data-&gt;fallThrough.block)));
+                
+                LBasicBlock lastNext = m_out.appendTo(isCellBlock, isStringBlock);
+                
+                m_out.branch(
+                    isString(value, provenType(m_node-&gt;child1())),
+                    unsure(isStringBlock), unsure(lowBlock(data-&gt;fallThrough.block)));
+                
+                m_out.appendTo(isStringBlock, lastNext);
+                
+                switchString(data, value);
+                return;
+            }
+                
+            default:
+                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                return;
+            }
+            return;
+        }
+            
+        case SwitchCell: {
+            LValue cell;
+            switch (m_node-&gt;child1().useKind()) {
+            case CellUse: {
+                cell = lowCell(m_node-&gt;child1());
+                break;
+            }
+                
+            case UntypedUse: {
+                LValue value = lowJSValue(m_node-&gt;child1());
+                LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchCell cell case&quot;));
+                m_out.branch(
+                    isCell(value, provenType(m_node-&gt;child1())),
+                    unsure(cellCase), unsure(lowBlock(data-&gt;fallThrough.block)));
+                m_out.appendTo(cellCase);
+                cell = value;
+                break;
+            }
+                
+            default:
+                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+                return;
+            }
+            
+            buildSwitch(m_node-&gt;switchData(), m_out.intPtr, cell);
+            return;
+        } }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Bad switch kind&quot;);
+    }
+    
+    void compileReturn()
+    {
+        m_out.ret(lowJSValue(m_node-&gt;child1()));
+    }
+    
+    void compileForceOSRExit()
+    {
+        terminate(InadequateCoverage);
+    }
+    
+    void compileThrow()
+    {
+        terminate(Uncountable);
+    }
+    
+    void compileInvalidationPoint()
+    {
+        if (verboseCompilationEnabled())
+            dataLog(&quot;    Invalidation point with availability: &quot;, availabilityMap(), &quot;\n&quot;);
+
+        DFG_ASSERT(m_graph, m_node, m_origin.exitOK);
+        
+        PatchpointValue* patchpoint = m_out.patchpoint(Void);
+        OSRExitDescriptor* descriptor = appendOSRExitDescriptor(noValue(), nullptr);
+        NodeOrigin origin = m_origin;
+        patchpoint-&gt;appendColdAnys(buildExitArguments(descriptor, origin.forExit, noValue()));
+        
+        State* state = &amp;m_ftlState;
+
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const B3::StackmapGenerationParams&amp; params) {
+                // The MacroAssembler knows more about this than B3 does. The watchpointLabel() method
+                // will ensure that this is followed by a nop shadow, but only when that is actually
+                // necessary.
+                CCallHelpers::Label label = jit.watchpointLabel();
+
+                RefPtr&lt;OSRExitHandle&gt; handle = descriptor-&gt;emitOSRExitLater(
+                    *state, UncountableInvalidation, origin, params);
+
+                RefPtr&lt;JITCode&gt; jitCode = state-&gt;jitCode.get();
+
+                jit.addLinkTask(
+                    [=] (LinkBuffer&amp; linkBuffer) {
+                        JumpReplacement jumpReplacement(
+                            linkBuffer.locationOf(label),
+                            linkBuffer.locationOf(handle-&gt;label));
+                        jitCode-&gt;common.jumpReplacements.append(jumpReplacement);
+                    });
+            });
+
+        // Set some obvious things.
+        patchpoint-&gt;effects.terminal = false;
+        patchpoint-&gt;effects.writesLocalState = false;
+        patchpoint-&gt;effects.readsLocalState = false;
+        
+        // This is how we tell B3 about the possibility of jump replacement.
+        patchpoint-&gt;effects.exitsSideways = true;
+        
+        // It's not possible for some prior branch to determine the safety of this operation. It's always
+        // fine to execute this on some path that wouldn't have originally executed it before
+        // optimization.
+        patchpoint-&gt;effects.controlDependent = false;
+
+        // If this falls through then it won't write anything.
+        patchpoint-&gt;effects.writes = HeapRange();
+
+        // When this abruptly terminates, it could read any heap location.
+        patchpoint-&gt;effects.reads = HeapRange::top();
+    }
+    
+    void compileIsUndefined()
+    {
+        setBoolean(equalNullOrUndefined(m_node-&gt;child1(), AllCellsAreFalse, EqualUndefined));
+    }
+    
+    void compileIsBoolean()
+    {
+        setBoolean(isBoolean(lowJSValue(m_node-&gt;child1()), provenType(m_node-&gt;child1())));
+    }
+    
+    void compileIsNumber()
+    {
+        setBoolean(isNumber(lowJSValue(m_node-&gt;child1()), provenType(m_node-&gt;child1())));
+    }
+    
+    void compileIsString()
+    {
+        LValue value = lowJSValue(m_node-&gt;child1());
+        
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsString cell case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsString continuation&quot;));
+        
+        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, continuation);
+        ValueFromBlock cellResult = m_out.anchor(isString(value, provenType(m_node-&gt;child1())));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, notCellResult, cellResult));
+    }
+
+    void compileIsObject()
+    {
+        LValue value = lowJSValue(m_node-&gt;child1());
+
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObject cell case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsObject continuation&quot;));
+
+        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
+
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, continuation);
+        ValueFromBlock cellResult = m_out.anchor(isObject(value, provenType(m_node-&gt;child1())));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, notCellResult, cellResult));
+    }
+
+    void compileIsObjectOrNull()
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        
+        Edge child = m_node-&gt;child1();
+        LValue value = lowJSValue(child);
+        
+        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull cell case&quot;));
+        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull not function case&quot;));
+        LBasicBlock objectCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull object case&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull slow path&quot;));
+        LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull not cell case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull continuation&quot;));
+        
+        m_out.branch(isCell(value, provenType(child)), unsure(cellCase), unsure(notCellCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(cellCase, notFunctionCase);
+        ValueFromBlock isFunctionResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isFunction(value, provenType(child)),
+            unsure(continuation), unsure(notFunctionCase));
+        
+        m_out.appendTo(notFunctionCase, objectCase);
+        ValueFromBlock notObjectResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isObject(value, provenType(child)),
+            unsure(objectCase), unsure(continuation));
+        
+        m_out.appendTo(objectCase, slowPath);
+        ValueFromBlock objectResult = m_out.anchor(m_out.booleanTrue);
+        m_out.branch(
+            isExoticForTypeof(value, provenType(child)),
+            rarely(slowPath), usually(continuation));
+        
+        m_out.appendTo(slowPath, notCellCase);
+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationObjectIsObject, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
+        ValueFromBlock slowResult = m_out.anchor(m_out.notZero64(slowResultValue));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(notCellCase, continuation);
+        LValue notCellResultValue = m_out.equal(value, m_out.constInt64(JSValue::encode(jsNull())));
+        ValueFromBlock notCellResult = m_out.anchor(notCellResultValue);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        LValue result = m_out.phi(
+            m_out.boolean,
+            isFunctionResult, notObjectResult, objectResult, slowResult, notCellResult);
+        setBoolean(result);
+    }
+    
+    void compileIsFunction()
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        
+        Edge child = m_node-&gt;child1();
+        LValue value = lowJSValue(child);
+        
+        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;IsFunction cell case&quot;));
+        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;IsFunction not function case&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;IsFunction slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsFunction continuation&quot;));
+        
+        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isCell(value, provenType(child)), unsure(cellCase), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(cellCase, notFunctionCase);
+        ValueFromBlock functionResult = m_out.anchor(m_out.booleanTrue);
+        m_out.branch(
+            isFunction(value, provenType(child)),
+            unsure(continuation), unsure(notFunctionCase));
+        
+        m_out.appendTo(notFunctionCase, slowPath);
+        ValueFromBlock objectResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(
+            isExoticForTypeof(value, provenType(child)),
+            rarely(slowPath), usually(continuation));
+        
+        m_out.appendTo(slowPath, continuation);
+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationObjectIsFunction, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
+        ValueFromBlock slowResult = m_out.anchor(m_out.notNull(slowResultValue));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        LValue result = m_out.phi(
+            m_out.boolean, notCellResult, functionResult, objectResult, slowResult);
+        setBoolean(result);
+    }
+    
+    void compileTypeOf()
+    {
+        Edge child = m_node-&gt;child1();
+        LValue value = lowJSValue(child);
+        
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;TypeOf continuation&quot;));
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+        
+        Vector&lt;ValueFromBlock&gt; results;
+        
+        buildTypeOf(
+            child, value,
+            [&amp;] (TypeofType type) {
+                results.append(m_out.anchor(weakPointer(vm().smallStrings.typeString(type))));
+                m_out.jump(continuation);
+            });
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, results));
+    }
+    
+    void compileIn()
+    {
+        Node* node = m_node;
+        Edge base = node-&gt;child2();
+        LValue cell = lowCell(base);
+        if (JSString* string = node-&gt;child1()-&gt;dynamicCastConstant&lt;JSString*&gt;()) {
+            if (string-&gt;tryGetValueImpl() &amp;&amp; string-&gt;tryGetValueImpl()-&gt;isAtomic()) {
+                UniquedStringImpl* str = bitwise_cast&lt;UniquedStringImpl*&gt;(string-&gt;tryGetValueImpl());
+                B3::PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+                patchpoint-&gt;appendSomeRegister(cell);
+                patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+
+                State* state = &amp;m_ftlState;
+                patchpoint-&gt;setGenerator(
+                    [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                        GPRReg baseGPR = params[1].gpr();
+                        GPRReg resultGPR = params[0].gpr();
+
+                        StructureStubInfo* stubInfo =
+                            jit.codeBlock()-&gt;addStubInfo(AccessType::In);
+                        stubInfo-&gt;callSiteIndex =
+                            state-&gt;jitCode-&gt;common.addCodeOrigin(node-&gt;origin.semantic);
+                        stubInfo-&gt;codeOrigin = node-&gt;origin.semantic;
+                        stubInfo-&gt;patch.baseGPR = static_cast&lt;int8_t&gt;(baseGPR);
+                        stubInfo-&gt;patch.valueGPR = static_cast&lt;int8_t&gt;(resultGPR);
+                        stubInfo-&gt;patch.usedRegisters = params.unavailableRegisters();
+
+                        CCallHelpers::PatchableJump jump = jit.patchableJump();
+                        CCallHelpers::Label done = jit.label();
+
+                        params.addLatePath(
+                            [=] (CCallHelpers&amp; jit) {
+                                AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                                jump.m_jump.link(&amp;jit);
+                                CCallHelpers::Label slowPathBegin = jit.label();
+                                CCallHelpers::Call slowPathCall = callOperation(
+                                    *state, params.unavailableRegisters(), jit,
+                                    node-&gt;origin.semantic, nullptr, operationInOptimize,
+                                    resultGPR, CCallHelpers::TrustedImmPtr(stubInfo), baseGPR,
+                                    CCallHelpers::TrustedImmPtr(str)).call();
+                                jit.jump().linkTo(done, &amp;jit);
+
+                                jit.addLinkTask(
+                                    [=] (LinkBuffer&amp; linkBuffer) {
+                                        CodeLocationCall callReturnLocation =
+                                            linkBuffer.locationOf(slowPathCall);
+                                        stubInfo-&gt;patch.deltaCallToDone =
+                                            CCallHelpers::differenceBetweenCodePtr(
+                                                callReturnLocation,
+                                                linkBuffer.locationOf(done));
+                                        stubInfo-&gt;patch.deltaCallToJump =
+                                            CCallHelpers::differenceBetweenCodePtr(
+                                                callReturnLocation,
+                                                linkBuffer.locationOf(jump));
+                                        stubInfo-&gt;callReturnLocation = callReturnLocation;
+                                        stubInfo-&gt;patch.deltaCallToSlowCase =
+                                            CCallHelpers::differenceBetweenCodePtr(
+                                                callReturnLocation,
+                                                linkBuffer.locationOf(slowPathBegin));
+                                    });
+                            });
+                    });
+
+                setJSValue(patchpoint);
+                return;
+            }
+        }
+
+        setJSValue(vmCall(m_out.int64, m_out.operation(operationGenericIn), m_callFrame, cell, lowJSValue(m_node-&gt;child1())));
+    }
+
+    void compileOverridesHasInstance()
+    {
+        JSFunction* defaultHasInstanceFunction = jsCast&lt;JSFunction*&gt;(m_node-&gt;cellOperand()-&gt;value());
+
+        LValue constructor = lowCell(m_node-&gt;child1());
+        LValue hasInstance = lowJSValue(m_node-&gt;child2());
+
+        LBasicBlock defaultHasInstance = FTL_NEW_BLOCK(m_out, (&quot;OverridesHasInstance Symbol.hasInstance is default&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;OverridesHasInstance continuation&quot;));
+
+        // Unlike in the DFG, we don't worry about cleaning this code up for the case where we have proven that the hasInstanceValue is a constant, as B3 should fix it for us.
+
+        ASSERT(!m_node-&gt;child2().node()-&gt;isCellConstant() || defaultHasInstanceFunction == m_node-&gt;child2().node()-&gt;asCell());
+
+        ValueFromBlock notDefaultHasInstanceResult = m_out.anchor(m_out.booleanTrue);
+        m_out.branch(m_out.notEqual(hasInstance, m_out.constIntPtr(defaultHasInstanceFunction)), unsure(continuation), unsure(defaultHasInstance));
+
+        LBasicBlock lastNext = m_out.appendTo(defaultHasInstance, continuation);
+        ValueFromBlock implementsDefaultHasInstanceResult = m_out.anchor(m_out.testIsZero32(
+            m_out.load8ZeroExt32(constructor, m_heaps.JSCell_typeInfoFlags),
+            m_out.constInt32(ImplementsDefaultHasInstance)));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, implementsDefaultHasInstanceResult, notDefaultHasInstanceResult));
+    }
+
+    void compileCheckTypeInfoFlags()
+    {
+        speculate(
+            BadTypeInfoFlags, noValue(), 0,
+            m_out.testIsZero32(
+                m_out.load8ZeroExt32(lowCell(m_node-&gt;child1()), m_heaps.JSCell_typeInfoFlags),
+                m_out.constInt32(m_node-&gt;typeInfoOperand())));
+    }
+    
+    void compileInstanceOf()
+    {
+        LValue cell;
+        
+        if (m_node-&gt;child1().useKind() == UntypedUse)
+            cell = lowJSValue(m_node-&gt;child1());
+        else
+            cell = lowCell(m_node-&gt;child1());
+        
+        LValue prototype = lowCell(m_node-&gt;child2());
+        
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf cell case&quot;));
+        LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf loop&quot;));
+        LBasicBlock notYetInstance = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf not yet instance&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf continuation&quot;));
+        
+        LValue condition;
+        if (m_node-&gt;child1().useKind() == UntypedUse)
+            condition = isCell(cell, provenType(m_node-&gt;child1()));
+        else
+            condition = m_out.booleanTrue;
+        
+        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+        m_out.branch(condition, unsure(isCellCase), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, loop);
+        
+        speculate(BadType, noValue(), 0, isNotObject(prototype, provenType(m_node-&gt;child2())));
+        
+        ValueFromBlock originalValue = m_out.anchor(cell);
+        m_out.jump(loop);
+        
+        m_out.appendTo(loop, notYetInstance);
+        LValue value = m_out.phi(m_out.int64, originalValue);
+        LValue structure = loadStructure(value);
+        LValue currentPrototype = m_out.load64(structure, m_heaps.Structure_prototype);
+        ValueFromBlock isInstanceResult = m_out.anchor(m_out.booleanTrue);
+        m_out.branch(
+            m_out.equal(currentPrototype, prototype),
+            unsure(continuation), unsure(notYetInstance));
+        
+        m_out.appendTo(notYetInstance, continuation);
+        ValueFromBlock notInstanceResult = m_out.anchor(m_out.booleanFalse);
+        m_out.addIncomingToPhi(value, m_out.anchor(currentPrototype));
+        m_out.branch(isCell(currentPrototype), unsure(loop), unsure(continuation));
+        
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(
+            m_out.phi(m_out.boolean, notCellResult, isInstanceResult, notInstanceResult));
+    }
+
+    void compileInstanceOfCustom()
+    {
+        LValue value = lowJSValue(m_node-&gt;child1());
+        LValue constructor = lowCell(m_node-&gt;child2());
+        LValue hasInstance = lowJSValue(m_node-&gt;child3());
+
+        setBoolean(m_out.logicalNot(m_out.equal(m_out.constInt32(0), vmCall(m_out.int32, m_out.operation(operationInstanceOfCustom), m_callFrame, value, constructor, hasInstance))));
+    }
+    
+    void compileCountExecution()
+    {
+        TypedPointer counter = m_out.absolute(m_node-&gt;executionCounter()-&gt;address());
+        m_out.store64(m_out.add(m_out.load64(counter), m_out.constInt64(1)), counter);
+    }
+    
+    void compileStoreBarrier()
+    {
+        emitStoreBarrier(lowCell(m_node-&gt;child1()));
+    }
+
+    void compileHasIndexedProperty()
+    {
+        switch (m_node-&gt;arrayMode().type()) {
+        case Array::Int32:
+        case Array::Contiguous: {
+            LValue base = lowCell(m_node-&gt;child1());
+            LValue index = lowInt32(m_node-&gt;child2());
+            LValue storage = lowStorage(m_node-&gt;child3());
+
+            IndexedAbstractHeap&amp; heap = m_node-&gt;arrayMode().type() == Array::Int32 ?
+                m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties;
+
+            LBasicBlock checkHole = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous check hole&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous continuation&quot;));
+
+            if (!m_node-&gt;arrayMode().isInBounds()) {
+                m_out.branch(
+                    m_out.aboveOrEqual(
+                        index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
+                    rarely(slowCase), usually(checkHole));
+            } else
+                m_out.jump(checkHole);
+
+            LBasicBlock lastNext = m_out.appendTo(checkHole, slowCase);
+            LValue checkHoleResultValue =
+                m_out.notZero64(m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2())));
+            ValueFromBlock checkHoleResult = m_out.anchor(checkHoleResultValue);
+            m_out.branch(checkHoleResultValue, usually(continuation), rarely(slowCase));
+
+            m_out.appendTo(slowCase, continuation);
+            ValueFromBlock slowResult = m_out.anchor(m_out.equal(
+                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
+                vmCall(m_out.int64, m_out.operation(operationHasIndexedProperty), m_callFrame, base, index)));
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            setBoolean(m_out.phi(m_out.boolean, checkHoleResult, slowResult));
+            return;
+        }
+        case Array::Double: {
+            LValue base = lowCell(m_node-&gt;child1());
+            LValue index = lowInt32(m_node-&gt;child2());
+            LValue storage = lowStorage(m_node-&gt;child3());
+            
+            IndexedAbstractHeap&amp; heap = m_heaps.indexedDoubleProperties;
+            
+            LBasicBlock checkHole = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double check hole&quot;));
+            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double slow case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double continuation&quot;));
+            
+            if (!m_node-&gt;arrayMode().isInBounds()) {
+                m_out.branch(
+                    m_out.aboveOrEqual(
+                        index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
+                    rarely(slowCase), usually(checkHole));
+            } else
+                m_out.jump(checkHole);
+
+            LBasicBlock lastNext = m_out.appendTo(checkHole, slowCase);
+            LValue doubleValue = m_out.loadDouble(baseIndex(heap, storage, index, m_node-&gt;child2()));
+            LValue checkHoleResultValue = m_out.doubleEqual(doubleValue, doubleValue);
+            ValueFromBlock checkHoleResult = m_out.anchor(checkHoleResultValue);
+            m_out.branch(checkHoleResultValue, usually(continuation), rarely(slowCase));
+            
+            m_out.appendTo(slowCase, continuation);
+            ValueFromBlock slowResult = m_out.anchor(m_out.equal(
+                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
+                vmCall(m_out.int64, m_out.operation(operationHasIndexedProperty), m_callFrame, base, index)));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            setBoolean(m_out.phi(m_out.boolean, checkHoleResult, slowResult));
+            return;
+        }
+            
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+            return;
+        }
+    }
+
+    void compileHasGenericProperty()
+    {
+        LValue base = lowJSValue(m_node-&gt;child1());
+        LValue property = lowCell(m_node-&gt;child2());
+        setJSValue(vmCall(m_out.int64, m_out.operation(operationHasGenericProperty), m_callFrame, base, property));
+    }
+
+    void compileHasStructureProperty()
+    {
+        LValue base = lowJSValue(m_node-&gt;child1());
+        LValue property = lowString(m_node-&gt;child2());
+        LValue enumerator = lowCell(m_node-&gt;child3());
+
+        LBasicBlock correctStructure = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty correct structure&quot;));
+        LBasicBlock wrongStructure = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty wrong structure&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty continuation&quot;));
+
+        m_out.branch(m_out.notEqual(
+            m_out.load32(base, m_heaps.JSCell_structureID),
+            m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedStructureID)),
+            rarely(wrongStructure), usually(correctStructure));
+
+        LBasicBlock lastNext = m_out.appendTo(correctStructure, wrongStructure);
+        ValueFromBlock correctStructureResult = m_out.anchor(m_out.booleanTrue);
+        m_out.jump(continuation);
+
+        m_out.appendTo(wrongStructure, continuation);
+        ValueFromBlock wrongStructureResult = m_out.anchor(
+            m_out.equal(
+                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
+                vmCall(m_out.int64, m_out.operation(operationHasGenericProperty), m_callFrame, base, property)));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, correctStructureResult, wrongStructureResult));
+    }
+
+    void compileGetDirectPname()
+    {
+        LValue base = lowCell(m_graph.varArgChild(m_node, 0));
+        LValue property = lowCell(m_graph.varArgChild(m_node, 1));
+        LValue index = lowInt32(m_graph.varArgChild(m_node, 2));
+        LValue enumerator = lowCell(m_graph.varArgChild(m_node, 3));
+
+        LBasicBlock checkOffset = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname check offset&quot;));
+        LBasicBlock inlineLoad = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname inline load&quot;));
+        LBasicBlock outOfLineLoad = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname out-of-line load&quot;));
+        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname slow case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname continuation&quot;));
+
+        m_out.branch(m_out.notEqual(
+            m_out.load32(base, m_heaps.JSCell_structureID),
+            m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedStructureID)),
+            rarely(slowCase), usually(checkOffset));
+
+        LBasicBlock lastNext = m_out.appendTo(checkOffset, inlineLoad);
+        m_out.branch(m_out.aboveOrEqual(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity)),
+            unsure(outOfLineLoad), unsure(inlineLoad));
+
+        m_out.appendTo(inlineLoad, outOfLineLoad);
+        ValueFromBlock inlineResult = m_out.anchor(
+            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), 
+                base, m_out.zeroExt(index, m_out.int64), ScaleEight, JSObject::offsetOfInlineStorage())));
+        m_out.jump(continuation);
+
+        m_out.appendTo(outOfLineLoad, slowCase);
+        LValue storage = loadButterflyReadOnly(base);
+        LValue realIndex = m_out.signExt32To64(
+            m_out.neg(m_out.sub(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity))));
+        int32_t offsetOfFirstProperty = static_cast&lt;int32_t&gt;(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue);
+        ValueFromBlock outOfLineResult = m_out.anchor(
+            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), storage, realIndex, ScaleEight, offsetOfFirstProperty)));
+        m_out.jump(continuation);
+
+        m_out.appendTo(slowCase, continuation);
+        ValueFromBlock slowCaseResult = m_out.anchor(
+            vmCall(m_out.int64, m_out.operation(operationGetByVal), m_callFrame, base, property));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, inlineResult, outOfLineResult, slowCaseResult));
+    }
+
+    void compileGetEnumerableLength()
+    {
+        LValue enumerator = lowCell(m_node-&gt;child1());
+        setInt32(m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_indexLength));
+    }
+
+    void compileGetPropertyEnumerator()
+    {
+        LValue base = lowCell(m_node-&gt;child1());
+        setJSValue(vmCall(m_out.int64, m_out.operation(operationGetPropertyEnumerator), m_callFrame, base));
+    }
+
+    void compileGetEnumeratorStructurePname()
+    {
+        LValue enumerator = lowCell(m_node-&gt;child1());
+        LValue index = lowInt32(m_node-&gt;child2());
+
+        LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname in bounds&quot;));
+        LBasicBlock outOfBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname out of bounds&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname continuation&quot;));
+
+        m_out.branch(m_out.below(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_endStructurePropertyIndex)),
+            usually(inBounds), rarely(outOfBounds));
+
+        LBasicBlock lastNext = m_out.appendTo(inBounds, outOfBounds);
+        LValue storage = m_out.loadPtr(enumerator, m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVector);
+        ValueFromBlock inBoundsResult = m_out.anchor(
+            m_out.loadPtr(m_out.baseIndex(m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, storage, m_out.zeroExtPtr(index))));
+        m_out.jump(continuation);
+
+        m_out.appendTo(outOfBounds, continuation);
+        ValueFromBlock outOfBoundsResult = m_out.anchor(m_out.constInt64(ValueNull));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, inBoundsResult, outOfBoundsResult));
+    }
+
+    void compileGetEnumeratorGenericPname()
+    {
+        LValue enumerator = lowCell(m_node-&gt;child1());
+        LValue index = lowInt32(m_node-&gt;child2());
+
+        LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname in bounds&quot;));
+        LBasicBlock outOfBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname out of bounds&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname continuation&quot;));
+
+        m_out.branch(m_out.below(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_endGenericPropertyIndex)),
+            usually(inBounds), rarely(outOfBounds));
+
+        LBasicBlock lastNext = m_out.appendTo(inBounds, outOfBounds);
+        LValue storage = m_out.loadPtr(enumerator, m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVector);
+        ValueFromBlock inBoundsResult = m_out.anchor(
+            m_out.loadPtr(m_out.baseIndex(m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, storage, m_out.zeroExtPtr(index))));
+        m_out.jump(continuation);
+
+        m_out.appendTo(outOfBounds, continuation);
+        ValueFromBlock outOfBoundsResult = m_out.anchor(m_out.constInt64(ValueNull));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setJSValue(m_out.phi(m_out.int64, inBoundsResult, outOfBoundsResult));
+    }
+    
+    void compileToIndexString()
+    {
+        LValue index = lowInt32(m_node-&gt;child1());
+        setJSValue(vmCall(m_out.int64, m_out.operation(operationToIndexString), m_callFrame, index));
+    }
+    
+    void compileCheckStructureImmediate()
+    {
+        LValue structure = lowCell(m_node-&gt;child1());
+        checkStructure(
+            structure, noValue(), BadCache, m_node-&gt;structureSet(),
+            [this] (Structure* structure) {
+                return weakStructure(structure);
+            });
+    }
+    
+    void compileMaterializeNewObject()
+    {
+        ObjectMaterializationData&amp; data = m_node-&gt;objectMaterializationData();
+        
+        // Lower the values first, to avoid creating values inside a control flow diamond.
+        
+        Vector&lt;LValue, 8&gt; values;
+        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i)
+            values.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
+        
+        const StructureSet&amp; set = m_node-&gt;structureSet();
+        
+        Vector&lt;LBasicBlock, 1&gt; blocks(set.size());
+        for (unsigned i = set.size(); i--;)
+            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject case &quot;, i));
+        LBasicBlock dummyDefault = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject default case&quot;));
+        LBasicBlock outerContinuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject continuation&quot;));
+        
+        Vector&lt;SwitchCase, 1&gt; cases(set.size());
+        for (unsigned i = set.size(); i--;)
+            cases[i] = SwitchCase(weakStructure(set[i]), blocks[i], Weight(1));
+        m_out.switchInstruction(
+            lowCell(m_graph.varArgChild(m_node, 0)), cases, dummyDefault, Weight(0));
+        
+        LBasicBlock outerLastNext = m_out.m_nextBlock;
+        
+        Vector&lt;ValueFromBlock, 1&gt; results;
+        
+        for (unsigned i = set.size(); i--;) {
+            m_out.appendTo(blocks[i], i + 1 &lt; set.size() ? blocks[i + 1] : dummyDefault);
+            
+            Structure* structure = set[i];
+            
+            LValue object;
+            LValue butterfly;
+            
+            if (structure-&gt;outOfLineCapacity()) {
+                size_t allocationSize = JSFinalObject::allocationSize(structure-&gt;inlineCapacity());
+                MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+                
+                LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject complex object allocation slow path&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject complex object allocation continuation&quot;));
+                
+                LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+                
+                LValue endOfStorage = allocateBasicStorageAndGetEnd(
+                    m_out.constIntPtr(structure-&gt;outOfLineCapacity() * sizeof(JSValue)),
+                    slowPath);
+                
+                LValue fastButterflyValue = m_out.add(
+                    m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage);
+                
+                LValue fastObjectValue = allocateObject(
+                    m_out.constIntPtr(allocator), structure, fastButterflyValue, slowPath);
+
+                ValueFromBlock fastObject = m_out.anchor(fastObjectValue);
+                ValueFromBlock fastButterfly = m_out.anchor(fastButterflyValue);
+                m_out.jump(continuation);
+                
+                m_out.appendTo(slowPath, continuation);
+
+                LValue slowObjectValue = lazySlowPath(
+                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                        return createLazyCallGenerator(
+                            operationNewObjectWithButterfly, locations[0].directGPR(),
+                            CCallHelpers::TrustedImmPtr(structure));
+                    });
+                ValueFromBlock slowObject = m_out.anchor(slowObjectValue);
+                ValueFromBlock slowButterfly = m_out.anchor(
+                    m_out.loadPtr(slowObjectValue, m_heaps.JSObject_butterfly));
+                
+                m_out.jump(continuation);
+                
+                m_out.appendTo(continuation, lastNext);
+                
+                object = m_out.phi(m_out.intPtr, fastObject, slowObject);
+                butterfly = m_out.phi(m_out.intPtr, fastButterfly, slowButterfly);
+            } else {
+                // In the easy case where we can do a one-shot allocation, we simply allocate the
+                // object to directly have the desired structure.
+                object = allocateObject(structure);
+                butterfly = nullptr; // Don't have one, don't need one.
+            }
+            
+            for (PropertyMapEntry entry : structure-&gt;getPropertiesConcurrently()) {
+                for (unsigned i = data.m_properties.size(); i--;) {
+                    PhantomPropertyValue value = data.m_properties[i];
+                    if (m_graph.identifiers()[value.m_identifierNumber] != entry.key)
+                        continue;
+                    
+                    LValue base = isInlineOffset(entry.offset) ? object : butterfly;
+                    storeProperty(values[i], base, value.m_identifierNumber, entry.offset);
+                    break;
+                }
+            }
+            
+            results.append(m_out.anchor(object));
+            m_out.jump(outerContinuation);
+        }
+        
+        m_out.appendTo(dummyDefault, outerContinuation);
+        m_out.unreachable();
+        
+        m_out.appendTo(outerContinuation, outerLastNext);
+        setJSValue(m_out.phi(m_out.intPtr, results));
+    }
+
+    void compileMaterializeCreateActivation()
+    {
+        ObjectMaterializationData&amp; data = m_node-&gt;objectMaterializationData();
+
+        Vector&lt;LValue, 8&gt; values;
+        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i)
+            values.append(lowJSValue(m_graph.varArgChild(m_node, 2 + i)));
+
+        LValue scope = lowCell(m_graph.varArgChild(m_node, 1));
+        SymbolTable* table = m_node-&gt;castOperand&lt;SymbolTable*&gt;();
+        ASSERT(table == m_graph.varArgChild(m_node, 0)-&gt;castConstant&lt;SymbolTable*&gt;());
+        Structure* structure = m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;activationStructure();
+
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MaterializeCreateActivation slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeCreateActivation continuation&quot;));
+
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+
+        LValue fastObject = allocateObject&lt;JSLexicalEnvironment&gt;(
+            JSLexicalEnvironment::allocationSize(table), structure, m_out.intPtrZero, slowPath);
+
+        m_out.storePtr(scope, fastObject, m_heaps.JSScope_next);
+        m_out.storePtr(weakPointer(table), fastObject, m_heaps.JSSymbolTableObject_symbolTable);
+
+
+        ValueFromBlock fastResult = m_out.anchor(fastObject);
+        m_out.jump(continuation);
+
+        m_out.appendTo(slowPath, continuation);
+        // We ensure that allocation sinking explicitly sets bottom values for all field members.
+        // Therefore, it doesn't matter what JSValue we pass in as the initialization value,
+        // because all fields will be overwritten.
+        // FIXME: It may be worth creating an operation that calls a constructor on JSLexicalEnvironment
+        // that doesn't initialize every slot, because we are guaranteed to do that here.
+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateActivationDirect, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(table),
+                    CCallHelpers::TrustedImm64(JSValue::encode(jsUndefined())));
+            }, scope);
+        ValueFromBlock slowResult = m_out.anchor(callResult);
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        LValue activation = m_out.phi(m_out.intPtr, fastResult, slowResult);
+        RELEASE_ASSERT(data.m_properties.size() == table-&gt;scopeSize());
+        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i) {
+            m_out.store64(values[i],
+                activation,
+                m_heaps.JSEnvironmentRecord_variables[data.m_properties[i].m_identifierNumber]);
+        }
+
+        if (validationEnabled()) {
+            // Validate to make sure every slot in the scope has one value.
+            ConcurrentJITLocker locker(table-&gt;m_lock);
+            for (auto iter = table-&gt;begin(locker), end = table-&gt;end(locker); iter != end; ++iter) {
+                bool found = false;
+                for (unsigned i = 0; i &lt; data.m_properties.size(); ++i) {
+                    if (iter-&gt;value.scopeOffset().offset() == data.m_properties[i].m_identifierNumber) {
+                        found = true;
+                        break;
+                    }
+                }
+                ASSERT_UNUSED(found, found);
+            }
+        }
+
+        setJSValue(activation);
+    }
+
+    void compileCheckWatchdogTimer()
+    {
+        LBasicBlock timerDidFire = FTL_NEW_BLOCK(m_out, (&quot;CheckWatchdogTimer timer did fire&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CheckWatchdogTimer continuation&quot;));
+        
+        LValue state = m_out.load8ZeroExt32(m_out.absolute(vm().watchdog()-&gt;timerDidFireAddress()));
+        m_out.branch(m_out.isZero32(state),
+            usually(continuation), rarely(timerDidFire));
+
+        LBasicBlock lastNext = m_out.appendTo(timerDidFire, continuation);
+
+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(operationHandleWatchdogTimer, InvalidGPRReg);
+            });
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+    }
+
+    LValue didOverflowStack()
+    {
+        // This does a very simple leaf function analysis. The invariant of FTL call
+        // frames is that the caller has already done enough of a stack check to
+        // prove that this call frame has enough stack to run, and also enough stack
+        // to make runtime calls. So, we only need to stack check when making calls
+        // to other JS functions. If we don't find such calls then we don't need to
+        // do any stack checks.
+        
+        for (BlockIndex blockIndex = 0; blockIndex &lt; m_graph.numBlocks(); ++blockIndex) {
+            DFG::BasicBlock* block = m_graph.block(blockIndex);
+            if (!block)
+                continue;
+            
+            for (unsigned nodeIndex = block-&gt;size(); nodeIndex--;) {
+                Node* node = block-&gt;at(nodeIndex);
+                
+                switch (node-&gt;op()) {
+                case GetById:
+                case PutById:
+                case Call:
+                case Construct:
+                    return m_out.below(
+                        m_callFrame,
+                        m_out.loadPtr(
+                            m_out.absolute(vm().addressOfFTLStackLimit())));
+                    
+                default:
+                    break;
+                }
+            }
+        }
+        
+        return m_out.booleanFalse;
+    }
+    
+    struct ArgumentsLength {
+        ArgumentsLength()
+            : isKnown(false)
+            , known(UINT_MAX)
+            , value(nullptr)
+        {
+        }
+        
+        bool isKnown;
+        unsigned known;
+        LValue value;
+    };
+    ArgumentsLength getArgumentsLength(InlineCallFrame* inlineCallFrame)
+    {
+        ArgumentsLength length;
+
+        if (inlineCallFrame &amp;&amp; !inlineCallFrame-&gt;isVarargs()) {
+            length.known = inlineCallFrame-&gt;arguments.size() - 1;
+            length.isKnown = true;
+            length.value = m_out.constInt32(length.known);
+        } else {
+            length.known = UINT_MAX;
+            length.isKnown = false;
+            
+            VirtualRegister argumentCountRegister;
+            if (!inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = inlineCallFrame-&gt;argumentCountRegister;
+            length.value = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
+        }
+        
+        return length;
+    }
+    
+    ArgumentsLength getArgumentsLength()
+    {
+        return getArgumentsLength(m_node-&gt;origin.semantic.inlineCallFrame);
+    }
+    
+    LValue getCurrentCallee()
+    {
+        if (InlineCallFrame* frame = m_node-&gt;origin.semantic.inlineCallFrame) {
+            if (frame-&gt;isClosureCall)
+                return m_out.loadPtr(addressFor(frame-&gt;calleeRecovery.virtualRegister()));
+            return weakPointer(frame-&gt;calleeRecovery.constant().asCell());
+        }
+        return m_out.loadPtr(addressFor(JSStack::Callee));
+    }
+    
+    LValue getArgumentsStart(InlineCallFrame* inlineCallFrame)
+    {
+        VirtualRegister start = AssemblyHelpers::argumentsStart(inlineCallFrame);
+        return addressFor(start).value();
+    }
+    
+    LValue getArgumentsStart()
+    {
+        return getArgumentsStart(m_node-&gt;origin.semantic.inlineCallFrame);
+    }
+    
+    template&lt;typename Functor&gt;
+    void checkStructure(
+        LValue structureDiscriminant, const FormattedValue&amp; formattedValue, ExitKind exitKind,
+        const StructureSet&amp; set, const Functor&amp; weakStructureDiscriminant)
+    {
+        if (set.isEmpty()) {
+            terminate(exitKind);
+            return;
+        }
+
+        if (set.size() == 1) {
+            speculate(
+                exitKind, formattedValue, 0,
+                m_out.notEqual(structureDiscriminant, weakStructureDiscriminant(set[0])));
+            return;
+        }
+        
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkStructure continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+        for (unsigned i = 0; i &lt; set.size() - 1; ++i) {
+            LBasicBlock nextStructure = FTL_NEW_BLOCK(m_out, (&quot;checkStructure nextStructure&quot;));
+            m_out.branch(
+                m_out.equal(structureDiscriminant, weakStructureDiscriminant(set[i])),
+                unsure(continuation), unsure(nextStructure));
+            m_out.appendTo(nextStructure);
+        }
+        
+        speculate(
+            exitKind, formattedValue, 0,
+            m_out.notEqual(structureDiscriminant, weakStructureDiscriminant(set.last())));
+        
+        m_out.jump(continuation);
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    LValue numberOrNotCellToInt32(Edge edge, LValue value)
+    {
+        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 int case&quot;));
+        LBasicBlock notIntCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 not int case&quot;));
+        LBasicBlock doubleCase = 0;
+        LBasicBlock notNumberCase = 0;
+        if (edge.useKind() == NotCellUse) {
+            doubleCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 double case&quot;));
+            notNumberCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 not number case&quot;));
+        }
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 continuation&quot;));
+        
+        Vector&lt;ValueFromBlock&gt; results;
+        
+        m_out.branch(isNotInt32(value), unsure(notIntCase), unsure(intCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(intCase, notIntCase);
+        results.append(m_out.anchor(unboxInt32(value)));
+        m_out.jump(continuation);
+        
+        if (edge.useKind() == NumberUse) {
+            m_out.appendTo(notIntCase, continuation);
+            FTL_TYPE_CHECK(jsValueValue(value), edge, SpecBytecodeNumber, isCellOrMisc(value));
+            results.append(m_out.anchor(doubleToInt32(unboxDouble(value))));
+            m_out.jump(continuation);
+        } else {
+            m_out.appendTo(notIntCase, doubleCase);
+            m_out.branch(
+                isCellOrMisc(value, provenType(edge)), unsure(notNumberCase), unsure(doubleCase));
+            
+            m_out.appendTo(doubleCase, notNumberCase);
+            results.append(m_out.anchor(doubleToInt32(unboxDouble(value))));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notNumberCase, continuation);
+            
+            FTL_TYPE_CHECK(jsValueValue(value), edge, ~SpecCell, isCell(value));
+            
+            LValue specialResult = m_out.select(
+                m_out.equal(value, m_out.constInt64(JSValue::encode(jsBoolean(true)))),
+                m_out.int32One, m_out.int32Zero);
+            results.append(m_out.anchor(specialResult));
+            m_out.jump(continuation);
+        }
+        
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.int32, results);
+    }
+
+    void checkInferredType(Edge edge, LValue value, const InferredType::Descriptor&amp; type)
+    {
+        // This cannot use FTL_TYPE_CHECK or typeCheck() because the check is performed on only some control-flow paths. For example, a node like:
+        //
+        //     MultiPutByOffset(...)
+        //
+        // may be lowered to:
+        //
+        //     switch (object-&gt;structure) {
+        //     case 42:
+        //         checkInferredType(..., type1);
+        //         ...
+        //         break;
+        //     case 43:
+        //         checkInferredType(..., type2);
+        //         ...
+        //         break;
+        //     }
+        //
+        // where type1 and type2 are different. Using typeCheck() would mean that the edge would be
+        // filtered by type1 &amp; type2, instead of type1 | type2.
+        
+        switch (type.kind()) {
+        case InferredType::Bottom:
+            speculate(BadType, jsValueValue(value), edge.node(), m_out.booleanTrue);
+            return;
+
+        case InferredType::Boolean:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotBoolean(value, provenType(edge)));
+            return;
+
+        case InferredType::Other:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotOther(value, provenType(edge)));
+            return;
+
+        case InferredType::Int32:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotInt32(value, provenType(edge)));
+            return;
+
+        case InferredType::Number:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotNumber(value, provenType(edge)));
+            return;
+
+        case InferredType::String:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
+            speculate(BadType, jsValueValue(value), edge.node(), isNotString(value, provenType(edge)));
+            return;
+
+        case InferredType::Symbol:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
+            speculate(BadType, jsValueValue(value), edge.node(), isNotSymbol(value, provenType(edge)));
+            return;
+
+        case InferredType::ObjectWithStructure:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
+            if (!abstractValue(edge).m_structure.isSubsetOf(StructureSet(type.structure()))) {
+                speculate(
+                    BadType, jsValueValue(value), edge.node(),
+                    m_out.notEqual(
+                        m_out.load32(value, m_heaps.JSCell_structureID),
+                        weakStructureID(type.structure())));
+            }
+            return;
+
+        case InferredType::ObjectWithStructureOrOther: {
+            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther cell case&quot;));
+            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther not cell case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther continuation&quot;));
+
+            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
+
+            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
+
+            if (!abstractValue(edge).m_structure.isSubsetOf(StructureSet(type.structure()))) {
+                speculate(
+                    BadType, jsValueValue(value), edge.node(),
+                    m_out.notEqual(
+                        m_out.load32(value, m_heaps.JSCell_structureID),
+                        weakStructureID(type.structure())));
+            }
+
+            m_out.jump(continuation);
+
+            m_out.appendTo(notCellCase, continuation);
+
+            speculate(
+                BadType, jsValueValue(value), edge.node(),
+                isNotOther(value, provenType(edge) &amp; ~SpecCell));
+
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            return;
+        }
+
+        case InferredType::Object:
+            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
+            speculate(BadType, jsValueValue(value), edge.node(), isNotObject(value, provenType(edge)));
+            return;
+
+        case InferredType::ObjectOrOther: {
+            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther cell case&quot;));
+            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther not cell case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther continuation&quot;));
+
+            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
+
+            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
+
+            speculate(
+                BadType, jsValueValue(value), edge.node(),
+                isNotObject(value, provenType(edge) &amp; SpecCell));
+
+            m_out.jump(continuation);
+
+            m_out.appendTo(notCellCase, continuation);
+
+            speculate(
+                BadType, jsValueValue(value), edge.node(),
+                isNotOther(value, provenType(edge) &amp; ~SpecCell));
+
+            m_out.jump(continuation);
+
+            m_out.appendTo(continuation, lastNext);
+            return;
+        }
+
+        case InferredType::Top:
+            return;
+        }
+
+        DFG_CRASH(m_graph, m_node, &quot;Bad inferred type&quot;);
+    }
+    
+    LValue loadProperty(LValue storage, unsigned identifierNumber, PropertyOffset offset)
+    {
+        return m_out.load64(addressOfProperty(storage, identifierNumber, offset));
+    }
+    
+    void storeProperty(
+        LValue value, LValue storage, unsigned identifierNumber, PropertyOffset offset)
+    {
+        m_out.store64(value, addressOfProperty(storage, identifierNumber, offset));
+    }
+    
+    TypedPointer addressOfProperty(
+        LValue storage, unsigned identifierNumber, PropertyOffset offset)
+    {
+        return m_out.address(
+            m_heaps.properties[identifierNumber], storage, offsetRelativeToBase(offset));
+    }
+    
+    LValue storageForTransition(
+        LValue object, PropertyOffset offset,
+        Structure* previousStructure, Structure* nextStructure)
+    {
+        if (isInlineOffset(offset))
+            return object;
+        
+        if (previousStructure-&gt;outOfLineCapacity() == nextStructure-&gt;outOfLineCapacity())
+            return loadButterflyWithBarrier(object);
+        
+        LValue result;
+        if (!previousStructure-&gt;outOfLineCapacity())
+            result = allocatePropertyStorage(object, previousStructure);
+        else {
+            result = reallocatePropertyStorage(
+                object, loadButterflyWithBarrier(object),
+                previousStructure, nextStructure);
+        }
+        
+        emitStoreBarrier(object);
+        
+        return result;
+    }
+    
+    LValue allocatePropertyStorage(LValue object, Structure* previousStructure)
+    {
+        if (previousStructure-&gt;couldHaveIndexingHeader()) {
+            return vmCall(
+                m_out.intPtr,
+                m_out.operation(
+                    operationReallocateButterflyToHavePropertyStorageWithInitialCapacity),
+                m_callFrame, object);
+        }
+        
+        LValue result = allocatePropertyStorageWithSizeImpl(initialOutOfLineCapacity);
+        m_out.storePtr(result, object, m_heaps.JSObject_butterfly);
+        return result;
+    }
+    
+    LValue reallocatePropertyStorage(
+        LValue object, LValue oldStorage, Structure* previous, Structure* next)
+    {
+        size_t oldSize = previous-&gt;outOfLineCapacity();
+        size_t newSize = oldSize * outOfLineGrowthFactor;
+
+        ASSERT_UNUSED(next, newSize == next-&gt;outOfLineCapacity());
+        
+        if (previous-&gt;couldHaveIndexingHeader()) {
+            LValue newAllocSize = m_out.constIntPtr(newSize);
+            return vmCall(m_out.intPtr, m_out.operation(operationReallocateButterflyToGrowPropertyStorage), m_callFrame, object, newAllocSize);
+        }
+        
+        LValue result = allocatePropertyStorageWithSizeImpl(newSize);
+
+        ptrdiff_t headerSize = -sizeof(IndexingHeader) - sizeof(void*);
+        ptrdiff_t endStorage = headerSize - static_cast&lt;ptrdiff_t&gt;(oldSize * sizeof(JSValue));
+
+        for (ptrdiff_t offset = headerSize; offset &gt; endStorage; offset -= sizeof(void*)) {
+            LValue loaded = 
+                m_out.loadPtr(m_out.address(m_heaps.properties.atAnyNumber(), oldStorage, offset));
+            m_out.storePtr(loaded, m_out.address(m_heaps.properties.atAnyNumber(), result, offset));
+        }
+        
+        m_out.storePtr(result, m_out.address(object, m_heaps.JSObject_butterfly));
+        
+        return result;
+    }
+    
+    LValue allocatePropertyStorageWithSizeImpl(size_t sizeInValues)
+    {
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;allocatePropertyStorageWithSizeImpl slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocatePropertyStorageWithSizeImpl continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        LValue endOfStorage = allocateBasicStorageAndGetEnd(
+            m_out.constIntPtr(sizeInValues * sizeof(JSValue)), slowPath);
+        
+        ValueFromBlock fastButterfly = m_out.anchor(
+            m_out.add(m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage));
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        
+        LValue slowButterflyValue;
+        if (sizeInValues == initialOutOfLineCapacity) {
+            slowButterflyValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationAllocatePropertyStorageWithInitialCapacity,
+                        locations[0].directGPR());
+                });
+        } else {
+            slowButterflyValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationAllocatePropertyStorage, locations[0].directGPR(),
+                        CCallHelpers::TrustedImmPtr(sizeInValues));
+                });
+        }
+        ValueFromBlock slowButterfly = m_out.anchor(slowButterflyValue);
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        
+        return m_out.phi(m_out.intPtr, fastButterfly, slowButterfly);
+    }
+    
+    LValue getById(LValue base)
+    {
+        Node* node = m_node;
+        UniquedStringImpl* uid = m_graph.identifiers()[node-&gt;identifierNumber()];
+
+        B3::PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+        patchpoint-&gt;appendSomeRegister(base);
+
+        // FIXME: If this is a GetByIdFlush, we might get some performance boost if we claim that it
+        // clobbers volatile registers late. It's not necessary for correctness, though, since the
+        // IC code is super smart about saving registers.
+        // https://bugs.webkit.org/show_bug.cgi?id=152848
+        
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                CallSiteIndex callSiteIndex =
+                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(node-&gt;origin.semantic);
+
+                // This is the direct exit target for operation calls.
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+
+                // This is the exit for call ICs created by the getById for getters. We don't have
+                // to do anything weird other than call this, since it will associate the exit with
+                // the call site index.
+                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                auto generator = Box&lt;JITGetByIdGenerator&gt;::create(
+                    jit.codeBlock(), node-&gt;origin.semantic, callSiteIndex,
+                    params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
+                    JSValueRegs(params[0].gpr()));
+
+                generator-&gt;generateFastPath(jit);
+                CCallHelpers::Label done = jit.label();
+
+                params.addLatePath(
+                    [=] (CCallHelpers&amp; jit) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                        generator-&gt;slowPathJump().link(&amp;jit);
+                        CCallHelpers::Label slowPathBegin = jit.label();
+                        CCallHelpers::Call slowPathCall = callOperation(
+                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                            exceptions.get(), operationGetByIdOptimize, params[0].gpr(),
+                            CCallHelpers::TrustedImmPtr(generator-&gt;stubInfo()), params[1].gpr(),
+                            CCallHelpers::TrustedImmPtr(uid)).call();
+                        jit.jump().linkTo(done, &amp;jit);
+
+                        generator-&gt;reportSlowPathCall(slowPathBegin, slowPathCall);
+
+                        jit.addLinkTask(
+                            [=] (LinkBuffer&amp; linkBuffer) {
+                                generator-&gt;finalize(linkBuffer);
+                            });
+                    });
+            });
+
+        return patchpoint;
+    }
+
+    LValue loadButterflyWithBarrier(LValue object)
+    {
+        return copyBarrier(
+            object, m_out.loadPtr(object, m_heaps.JSObject_butterfly), operationGetButterfly);
+    }
+    
+    LValue loadVectorWithBarrier(LValue object)
+    {
+        LValue fastResultValue = m_out.loadPtr(object, m_heaps.JSArrayBufferView_vector);
+        return copyBarrier(
+            fastResultValue,
+            [&amp;] () -&gt; LValue {
+                LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;loadVectorWithBarrier slow path&quot;));
+                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;loadVectorWithBarrier continuation&quot;));
+
+                ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+                m_out.branch(isFastTypedArray(object), rarely(slowPath), usually(continuation));
+
+                LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
+
+                LValue slowResultValue = lazySlowPath(
+                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                        return createLazyCallGenerator(
+                            operationGetArrayBufferVector, locations[0].directGPR(),
+                            locations[1].directGPR());
+                    }, object);
+                ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+                m_out.jump(continuation);
+
+                m_out.appendTo(continuation, lastNext);
+                return m_out.phi(m_out.intPtr, fastResult, slowResult);
+            });
+    }
+
+    LValue copyBarrier(LValue object, LValue pointer, P_JITOperation_EC slowPathFunction)
+    {
+        return copyBarrier(
+            pointer,
+            [&amp;] () -&gt; LValue {
+                return lazySlowPath(
+                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                        return createLazyCallGenerator(
+                            slowPathFunction, locations[0].directGPR(), locations[1].directGPR());
+                    }, object);
+            });
+    }
+
+    template&lt;typename Functor&gt;
+    LValue copyBarrier(LValue pointer, const Functor&amp; functor)
+    {
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;copyBarrier slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;copyBarrier continuation&quot;));
+
+        ValueFromBlock fastResult = m_out.anchor(pointer);
+        m_out.branch(isInToSpace(pointer), usually(continuation), rarely(slowPath));
+
+        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
+
+        ValueFromBlock slowResult = m_out.anchor(functor());
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.intPtr, fastResult, slowResult);
+    }
+
+    LValue isInToSpace(LValue pointer)
+    {
+        return m_out.testIsZeroPtr(pointer, m_out.constIntPtr(CopyBarrierBase::spaceBits));
+    }
+
+    LValue loadButterflyReadOnly(LValue object)
+    {
+        return removeSpaceBits(m_out.loadPtr(object, m_heaps.JSObject_butterfly));
+    }
+
+    LValue loadVectorReadOnly(LValue object)
+    {
+        LValue fastResultValue = m_out.loadPtr(object, m_heaps.JSArrayBufferView_vector);
+
+        LBasicBlock possiblyFromSpace = FTL_NEW_BLOCK(m_out, (&quot;loadVectorReadOnly possibly from space&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;loadVectorReadOnly continuation&quot;));
+
+        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+
+        m_out.branch(isInToSpace(fastResultValue), usually(continuation), rarely(possiblyFromSpace));
+
+        LBasicBlock lastNext = m_out.appendTo(possiblyFromSpace, continuation);
+
+        LValue slowResultValue = m_out.select(
+            isFastTypedArray(object), removeSpaceBits(fastResultValue), fastResultValue);
+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        
+        return m_out.phi(m_out.intPtr, fastResult, slowResult);
+    }
+
+    LValue removeSpaceBits(LValue storage)
+    {
+        return m_out.bitAnd(
+            storage, m_out.constIntPtr(~static_cast&lt;intptr_t&gt;(CopyBarrierBase::spaceBits)));
+    }
+
+    LValue isFastTypedArray(LValue object)
+    {
+        return m_out.equal(
+            m_out.load32(object, m_heaps.JSArrayBufferView_mode),
+            m_out.constInt32(FastTypedArray));
+    }
+    
+    TypedPointer baseIndex(IndexedAbstractHeap&amp; heap, LValue storage, LValue index, Edge edge, ptrdiff_t offset = 0)
+    {
+        return m_out.baseIndex(
+            heap, storage, m_out.zeroExtPtr(index), provenValue(edge), offset);
+    }
+
+    template&lt;typename IntFunctor, typename DoubleFunctor&gt;
+    void compare(
+        const IntFunctor&amp; intFunctor, const DoubleFunctor&amp; doubleFunctor,
+        S_JITOperation_EJJ helperFunction)
+    {
+        if (m_node-&gt;isBinaryUseKind(Int32Use)) {
+            LValue left = lowInt32(m_node-&gt;child1());
+            LValue right = lowInt32(m_node-&gt;child2());
+            setBoolean(intFunctor(left, right));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(Int52RepUse)) {
+            Int52Kind kind;
+            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
+            LValue right = lowInt52(m_node-&gt;child2(), kind);
+            setBoolean(intFunctor(left, right));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(DoubleRepUse)) {
+            LValue left = lowDouble(m_node-&gt;child1());
+            LValue right = lowDouble(m_node-&gt;child2());
+            setBoolean(doubleFunctor(left, right));
+            return;
+        }
+        
+        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
+            nonSpeculativeCompare(intFunctor, helperFunction);
+            return;
+        }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
+    }
+    
+    void compareEqObjectOrOtherToObject(Edge leftChild, Edge rightChild)
+    {
+        LValue rightCell = lowCell(rightChild);
+        LValue leftValue = lowJSValue(leftChild, ManualOperandSpeculation);
+        
+        speculateTruthyObject(rightChild, rightCell, SpecObject);
+        
+        LBasicBlock leftCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject left cell case&quot;));
+        LBasicBlock leftNotCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject left not cell case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject continuation&quot;));
+        
+        m_out.branch(
+            isCell(leftValue, provenType(leftChild)),
+            unsure(leftCellCase), unsure(leftNotCellCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(leftCellCase, leftNotCellCase);
+        speculateTruthyObject(leftChild, leftValue, SpecObject | (~SpecCell));
+        ValueFromBlock cellResult = m_out.anchor(m_out.equal(rightCell, leftValue));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(leftNotCellCase, continuation);
+        FTL_TYPE_CHECK(
+            jsValueValue(leftValue), leftChild, SpecOther | SpecCell, isNotOther(leftValue));
+        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, cellResult, notCellResult));
+    }
+    
+    void speculateTruthyObject(Edge edge, LValue cell, SpeculatedType filter)
+    {
+        if (masqueradesAsUndefinedWatchpointIsStillValid()) {
+            FTL_TYPE_CHECK(jsValueValue(cell), edge, filter, isNotObject(cell));
+            return;
+        }
+        
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, filter, isNotObject(cell));
+        speculate(
+            BadType, jsValueValue(cell), edge.node(),
+            m_out.testNonZero32(
+                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
+                m_out.constInt32(MasqueradesAsUndefined)));
+    }
+
+    template&lt;typename IntFunctor&gt;
+    void nonSpeculativeCompare(const IntFunctor&amp; intFunctor, S_JITOperation_EJJ helperFunction)
+    {
+        LValue left = lowJSValue(m_node-&gt;child1());
+        LValue right = lowJSValue(m_node-&gt;child2());
+        
+        LBasicBlock leftIsInt = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped left is int&quot;));
+        LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped fast path&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped continuation&quot;));
+        
+        m_out.branch(isNotInt32(left), rarely(slowPath), usually(leftIsInt));
+        
+        LBasicBlock lastNext = m_out.appendTo(leftIsInt, fastPath);
+        m_out.branch(isNotInt32(right), rarely(slowPath), usually(fastPath));
+        
+        m_out.appendTo(fastPath, slowPath);
+        ValueFromBlock fastResult = m_out.anchor(intFunctor(unboxInt32(left), unboxInt32(right)));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        ValueFromBlock slowResult = m_out.anchor(m_out.notNull(vmCall(
+            m_out.intPtr, m_out.operation(helperFunction), m_callFrame, left, right)));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        setBoolean(m_out.phi(m_out.boolean, fastResult, slowResult));
+    }
+
+    LValue stringsEqual(LValue leftJSString, LValue rightJSString)
+    {
+        LBasicBlock notTriviallyUnequalCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual not trivially unequal case&quot;));
+        LBasicBlock notEmptyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual not empty case&quot;));
+        LBasicBlock leftReadyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual left ready case&quot;));
+        LBasicBlock rightReadyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual right ready case&quot;));
+        LBasicBlock left8BitCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual left 8-bit case&quot;));
+        LBasicBlock right8BitCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual right 8-bit case&quot;));
+        LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual loop&quot;));
+        LBasicBlock bytesEqual = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual bytes equal&quot;));
+        LBasicBlock trueCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual true case&quot;));
+        LBasicBlock falseCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual false case&quot;));
+        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual slow case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual continuation&quot;));
+
+        LValue length = m_out.load32(leftJSString, m_heaps.JSString_length);
+
+        m_out.branch(
+            m_out.notEqual(length, m_out.load32(rightJSString, m_heaps.JSString_length)),
+            unsure(falseCase), unsure(notTriviallyUnequalCase));
+
+        LBasicBlock lastNext = m_out.appendTo(notTriviallyUnequalCase, notEmptyCase);
+
+        m_out.branch(m_out.isZero32(length), unsure(trueCase), unsure(notEmptyCase));
+
+        m_out.appendTo(notEmptyCase, leftReadyCase);
+
+        LValue left = m_out.loadPtr(leftJSString, m_heaps.JSString_value);
+        LValue right = m_out.loadPtr(rightJSString, m_heaps.JSString_value);
+
+        m_out.branch(m_out.notNull(left), usually(leftReadyCase), rarely(slowCase));
+
+        m_out.appendTo(leftReadyCase, rightReadyCase);
+        
+        m_out.branch(m_out.notNull(right), usually(rightReadyCase), rarely(slowCase));
+
+        m_out.appendTo(rightReadyCase, left8BitCase);
+
+        m_out.branch(
+            m_out.testIsZero32(
+                m_out.load32(left, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIs8Bit())),
+            unsure(slowCase), unsure(left8BitCase));
+
+        m_out.appendTo(left8BitCase, right8BitCase);
+
+        m_out.branch(
+            m_out.testIsZero32(
+                m_out.load32(right, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIs8Bit())),
+            unsure(slowCase), unsure(right8BitCase));
+
+        m_out.appendTo(right8BitCase, loop);
+
+        LValue leftData = m_out.loadPtr(left, m_heaps.StringImpl_data);
+        LValue rightData = m_out.loadPtr(right, m_heaps.StringImpl_data);
+
+        ValueFromBlock indexAtStart = m_out.anchor(length);
+
+        m_out.jump(loop);
+
+        m_out.appendTo(loop, bytesEqual);
+
+        LValue indexAtLoopTop = m_out.phi(m_out.int32, indexAtStart);
+        LValue indexInLoop = m_out.sub(indexAtLoopTop, m_out.int32One);
+
+        LValue leftByte = m_out.load8ZeroExt32(
+            m_out.baseIndex(m_heaps.characters8, leftData, m_out.zeroExtPtr(indexInLoop)));
+        LValue rightByte = m_out.load8ZeroExt32(
+            m_out.baseIndex(m_heaps.characters8, rightData, m_out.zeroExtPtr(indexInLoop)));
+
+        m_out.branch(m_out.notEqual(leftByte, rightByte), unsure(falseCase), unsure(bytesEqual));
+
+        m_out.appendTo(bytesEqual, trueCase);
+
+        ValueFromBlock indexForNextIteration = m_out.anchor(indexInLoop);
+        m_out.addIncomingToPhi(indexAtLoopTop, indexForNextIteration);
+        m_out.branch(m_out.notZero32(indexInLoop), unsure(loop), unsure(trueCase));
+
+        m_out.appendTo(trueCase, falseCase);
+        
+        ValueFromBlock trueResult = m_out.anchor(m_out.booleanTrue);
+        m_out.jump(continuation);
+
+        m_out.appendTo(falseCase, slowCase);
+
+        ValueFromBlock falseResult = m_out.anchor(m_out.booleanFalse);
+        m_out.jump(continuation);
+
+        m_out.appendTo(slowCase, continuation);
+
+        LValue slowResultValue = vmCall(
+            m_out.int64, m_out.operation(operationCompareStringEq), m_callFrame,
+            leftJSString, rightJSString);
+        ValueFromBlock slowResult = m_out.anchor(unboxBoolean(slowResultValue));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.boolean, trueResult, falseResult, slowResult);
+    }
+
+    enum ScratchFPRUsage {
+        DontNeedScratchFPR,
+        NeedScratchFPR
+    };
+    template&lt;typename BinaryArithOpGenerator, ScratchFPRUsage scratchFPRUsage = DontNeedScratchFPR&gt;
+    void emitBinarySnippet(J_JITOperation_EJJ slowPathFunction)
+    {
+        Node* node = m_node;
+        
+        LValue left = lowJSValue(node-&gt;child1());
+        LValue right = lowJSValue(node-&gt;child2());
+
+        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
+        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
+            
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+        patchpoint-&gt;appendSomeRegister(left);
+        patchpoint-&gt;appendSomeRegister(right);
+        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
+        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+        patchpoint-&gt;numGPScratchRegisters = 1;
+        patchpoint-&gt;numFPScratchRegisters = 2;
+        if (scratchFPRUsage == NeedScratchFPR)
+            patchpoint-&gt;numFPScratchRegisters++;
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+
+                auto generator = Box&lt;BinaryArithOpGenerator&gt;::create(
+                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
+                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()),
+                    params.fpScratch(0), params.fpScratch(1), params.gpScratch(0),
+                    scratchFPRUsage == NeedScratchFPR ? params.fpScratch(2) : InvalidFPRReg);
+
+                generator-&gt;generateFastPath(jit);
+
+                if (generator-&gt;didEmitFastPath()) {
+                    generator-&gt;endJumpList().link(&amp;jit);
+                    CCallHelpers::Label done = jit.label();
+                    
+                    params.addLatePath(
+                        [=] (CCallHelpers&amp; jit) {
+                            AllowMacroScratchRegisterUsage allowScratch(jit);
+                            
+                            generator-&gt;slowPathJumpList().link(&amp;jit);
+                            callOperation(
+                                *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                                exceptions.get(), slowPathFunction, params[0].gpr(),
+                                params[1].gpr(), params[2].gpr());
+                            jit.jump().linkTo(done, &amp;jit);
+                        });
+                } else {
+                    callOperation(
+                        *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                        exceptions.get(), slowPathFunction, params[0].gpr(), params[1].gpr(),
+                        params[2].gpr());
+                }
+            });
+
+        setJSValue(patchpoint);
+    }
+
+    template&lt;typename BinaryBitOpGenerator&gt;
+    void emitBinaryBitOpSnippet(J_JITOperation_EJJ slowPathFunction)
+    {
+        Node* node = m_node;
+        
+        // FIXME: Make this do exceptions.
+        // https://bugs.webkit.org/show_bug.cgi?id=151686
+            
+        LValue left = lowJSValue(node-&gt;child1());
+        LValue right = lowJSValue(node-&gt;child2());
+
+        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
+        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
+            
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+        patchpoint-&gt;appendSomeRegister(left);
+        patchpoint-&gt;appendSomeRegister(right);
+        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
+        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+        patchpoint-&gt;numGPScratchRegisters = 1;
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                    
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+                    
+                auto generator = Box&lt;BinaryBitOpGenerator&gt;::create(
+                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
+                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()), params.gpScratch(0));
+
+                generator-&gt;generateFastPath(jit);
+                generator-&gt;endJumpList().link(&amp;jit);
+                CCallHelpers::Label done = jit.label();
+
+                params.addLatePath(
+                    [=] (CCallHelpers&amp; jit) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+                            
+                        generator-&gt;slowPathJumpList().link(&amp;jit);
+                        callOperation(
+                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                            exceptions.get(), slowPathFunction, params[0].gpr(),
+                            params[1].gpr(), params[2].gpr());
+                        jit.jump().linkTo(done, &amp;jit);
+                    });
+            });
+
+        setJSValue(patchpoint);
+    }
+
+    void emitRightShiftSnippet(JITRightShiftGenerator::ShiftType shiftType)
+    {
+        Node* node = m_node;
+        
+        // FIXME: Make this do exceptions.
+        // https://bugs.webkit.org/show_bug.cgi?id=151686
+            
+        LValue left = lowJSValue(node-&gt;child1());
+        LValue right = lowJSValue(node-&gt;child2());
+
+        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
+        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
+            
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+        patchpoint-&gt;appendSomeRegister(left);
+        patchpoint-&gt;appendSomeRegister(right);
+        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
+        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(patchpoint);
+        patchpoint-&gt;numGPScratchRegisters = 1;
+        patchpoint-&gt;numFPScratchRegisters = 1;
+        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
+        State* state = &amp;m_ftlState;
+        patchpoint-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                    
+                Box&lt;CCallHelpers::JumpList&gt; exceptions =
+                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
+                    
+                auto generator = Box&lt;JITRightShiftGenerator&gt;::create(
+                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
+                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()),
+                    params.fpScratch(0), params.gpScratch(0), InvalidFPRReg, shiftType);
+
+                generator-&gt;generateFastPath(jit);
+                generator-&gt;endJumpList().link(&amp;jit);
+                CCallHelpers::Label done = jit.label();
+
+                params.addLatePath(
+                    [=] (CCallHelpers&amp; jit) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+                            
+                        generator-&gt;slowPathJumpList().link(&amp;jit);
+
+                        J_JITOperation_EJJ slowPathFunction =
+                            shiftType == JITRightShiftGenerator::SignedShift
+                            ? operationValueBitRShift : operationValueBitURShift;
+                        
+                        callOperation(
+                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
+                            exceptions.get(), slowPathFunction, params[0].gpr(),
+                            params[1].gpr(), params[2].gpr());
+                        jit.jump().linkTo(done, &amp;jit);
+                    });
+            });
+
+        setJSValue(patchpoint);
+    }
+
+    LValue allocateCell(LValue allocator, LBasicBlock slowPath)
+    {
+        LBasicBlock success = FTL_NEW_BLOCK(m_out, (&quot;object allocation success&quot;));
+    
+        LValue result;
+        LValue condition;
+        if (Options::forceGCSlowPaths()) {
+            result = m_out.intPtrZero;
+            condition = m_out.booleanFalse;
+        } else {
+            result = m_out.loadPtr(
+                allocator, m_heaps.MarkedAllocator_freeListHead);
+            condition = m_out.notNull(result);
+        }
+        m_out.branch(condition, usually(success), rarely(slowPath));
+        
+        m_out.appendTo(success);
+        
+        m_out.storePtr(
+            m_out.loadPtr(result, m_heaps.JSCell_freeListNext),
+            allocator, m_heaps.MarkedAllocator_freeListHead);
+
+        return result;
+    }
+    
+    void storeStructure(LValue object, Structure* structure)
+    {
+        m_out.store32(m_out.constInt32(structure-&gt;id()), object, m_heaps.JSCell_structureID);
+        m_out.store32(
+            m_out.constInt32(structure-&gt;objectInitializationBlob()),
+            object, m_heaps.JSCell_usefulBytes);
+    }
+
+    LValue allocateCell(LValue allocator, Structure* structure, LBasicBlock slowPath)
+    {
+        LValue result = allocateCell(allocator, slowPath);
+        storeStructure(result, structure);
+        return result;
+    }
+
+    LValue allocateObject(
+        LValue allocator, Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    {
+        LValue result = allocateCell(allocator, structure, slowPath);
+        m_out.storePtr(butterfly, result, m_heaps.JSObject_butterfly);
+        return result;
+    }
+    
+    template&lt;typename ClassType&gt;
+    LValue allocateObject(
+        size_t size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    {
+        MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectOfType&lt;ClassType&gt;(size);
+        return allocateObject(m_out.constIntPtr(allocator), structure, butterfly, slowPath);
+    }
+    
+    template&lt;typename ClassType&gt;
+    LValue allocateObject(Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    {
+        return allocateObject&lt;ClassType&gt;(
+            ClassType::allocationSize(0), structure, butterfly, slowPath);
+    }
+    
+    template&lt;typename ClassType&gt;
+    LValue allocateVariableSizedObject(
+        LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    {
+        static_assert(!(MarkedSpace::preciseStep &amp; (MarkedSpace::preciseStep - 1)), &quot;MarkedSpace::preciseStep must be a power of two.&quot;);
+        static_assert(!(MarkedSpace::impreciseStep &amp; (MarkedSpace::impreciseStep - 1)), &quot;MarkedSpace::impreciseStep must be a power of two.&quot;);
+
+        LValue subspace = m_out.constIntPtr(&amp;vm().heap.subspaceForObjectOfType&lt;ClassType&gt;());
+        
+        LBasicBlock smallCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject small case&quot;));
+        LBasicBlock largeOrOversizeCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject large or oversize case&quot;));
+        LBasicBlock largeCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject large case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject continuation&quot;));
+        
+        LValue uproundedSize = m_out.add(size, m_out.constInt32(MarkedSpace::preciseStep - 1));
+        LValue isSmall = m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::preciseCutoff));
+        m_out.branch(isSmall, unsure(smallCaseBlock), unsure(largeOrOversizeCaseBlock));
+        
+        LBasicBlock lastNext = m_out.appendTo(smallCaseBlock, largeOrOversizeCaseBlock);
+        TypedPointer address = m_out.baseIndex(
+            m_heaps.MarkedSpace_Subspace_preciseAllocators, subspace,
+            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::preciseStep)))));
+        ValueFromBlock smallAllocator = m_out.anchor(address.value());
+        m_out.jump(continuation);
+        
+        m_out.appendTo(largeOrOversizeCaseBlock, largeCaseBlock);
+        m_out.branch(
+            m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::impreciseCutoff)),
+            usually(largeCaseBlock), rarely(slowPath));
+        
+        m_out.appendTo(largeCaseBlock, continuation);
+        address = m_out.baseIndex(
+            m_heaps.MarkedSpace_Subspace_impreciseAllocators, subspace,
+            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::impreciseStep)))));
+        ValueFromBlock largeAllocator = m_out.anchor(address.value());
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        LValue allocator = m_out.phi(m_out.intPtr, smallAllocator, largeAllocator);
+        
+        return allocateObject(allocator, structure, butterfly, slowPath);
+    }
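The size-class selection above rounds the request up to a step boundary and then shifts by log2 of the step to index a table of per-size allocators; the `static_assert`s guarantee the step is a power of two so the division is a single shift. A minimal standalone sketch of that index computation, using hypothetical step/cutoff values (the real `MarkedSpace` constants differ):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical constants for illustration only.
constexpr size_t preciseStep = 16;
constexpr size_t log2OfPreciseStep = 4; // what getLSBSet(preciseStep) computes

static_assert((preciseStep & (preciseStep - 1)) == 0,
              "step must be a power of two so dividing by it is a shift");

// Round size up to the next step boundary, then shift to get the index
// into the per-size-class allocator table.
size_t allocatorIndex(size_t size)
{
    size_t uprounded = size + (preciseStep - 1);
    return uprounded >> log2OfPreciseStep;
}
```

Note the code never masks off the low bits after rounding up: the right shift discards them, which is exactly why the step must be a power of two.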
+    
+    // Returns a pointer to the end of the allocation.
+    LValue allocateBasicStorageAndGetEnd(LValue size, LBasicBlock slowPath)
+    {
+        CopiedAllocator&amp; allocator = vm().heap.storageAllocator();
+        
+        LBasicBlock success = FTL_NEW_BLOCK(m_out, (&quot;storage allocation success&quot;));
+        
+        LValue remaining = m_out.loadPtr(m_out.absolute(&amp;allocator.m_currentRemaining));
+        LValue newRemaining = m_out.sub(remaining, size);
+        
+        m_out.branch(
+            m_out.lessThan(newRemaining, m_out.intPtrZero),
+            rarely(slowPath), usually(success));
+        
+        m_out.appendTo(success);
+        
+        m_out.storePtr(newRemaining, m_out.absolute(&amp;allocator.m_currentRemaining));
+        return m_out.sub(
+            m_out.loadPtr(m_out.absolute(&amp;allocator.m_currentPayloadEnd)), newRemaining);
+    }
+
+    LValue allocateBasicStorage(LValue size, LBasicBlock slowPath)
+    {
+        return m_out.sub(allocateBasicStorageAndGetEnd(size, slowPath), size);
+    }
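The storage fast path above is a downward bump allocator: subtract the request from the remaining byte count, bail to the slow path if that goes negative, and recover the allocation's end as `payloadEnd - newRemaining`. A self-contained sketch of the same arithmetic, with hypothetical field names mirroring `m_currentRemaining` and `m_currentPayloadEnd`:

```cpp
#include <cassert>
#include <cstdint>

// Minimal model of the copied-space bump allocator fast path.
struct BumpAllocator {
    intptr_t remaining; // bytes left in the current block
    char* payloadEnd;   // one past the end of the current block
};

// Returns a pointer to the END of the new allocation, or nullptr where the
// real code would branch to the slow path to refill the block.
char* allocateAndGetEnd(BumpAllocator& a, intptr_t size)
{
    intptr_t newRemaining = a.remaining - size;
    if (newRemaining < 0)
        return nullptr; // slow path
    a.remaining = newRemaining;
    return a.payloadEnd - newRemaining;
}
```

Returning the end rather than the start is deliberate: `allocateJSArray` wants the end so it can place the butterfly `vectorLength * sizeof(JSValue)` bytes before it, and `allocateBasicStorage` simply subtracts `size` to recover the start.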
+    
+    LValue allocateObject(Structure* structure)
+    {
+        size_t allocationSize = JSFinalObject::allocationSize(structure-&gt;inlineCapacity());
+        MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+        
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;allocateObject slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocateObject continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        ValueFromBlock fastResult = m_out.anchor(allocateObject(
+            m_out.constIntPtr(allocator), structure, m_out.intPtrZero, slowPath));
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+
+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNewObject, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure));
+            });
+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.intPtr, fastResult, slowResult);
+    }
+    
+    struct ArrayValues {
+        ArrayValues()
+            : array(0)
+            , butterfly(0)
+        {
+        }
+        
+        ArrayValues(LValue array, LValue butterfly)
+            : array(array)
+            , butterfly(butterfly)
+        {
+        }
+        
+        LValue array;
+        LValue butterfly;
+    };
+    ArrayValues allocateJSArray(
+        Structure* structure, unsigned numElements, LBasicBlock slowPath)
+    {
+        ASSERT(
+            hasUndecided(structure-&gt;indexingType())
+            || hasInt32(structure-&gt;indexingType())
+            || hasDouble(structure-&gt;indexingType())
+            || hasContiguous(structure-&gt;indexingType()));
+        
+        unsigned vectorLength = std::max(BASE_VECTOR_LEN, numElements);
+        
+        LValue endOfStorage = allocateBasicStorageAndGetEnd(
+            m_out.constIntPtr(sizeof(JSValue) * vectorLength + sizeof(IndexingHeader)),
+            slowPath);
+        
+        LValue butterfly = m_out.sub(
+            endOfStorage, m_out.constIntPtr(sizeof(JSValue) * vectorLength));
+        
+        LValue object = allocateObject&lt;JSArray&gt;(
+            structure, butterfly, slowPath);
+        
+        m_out.store32(m_out.constInt32(numElements), butterfly, m_heaps.Butterfly_publicLength);
+        m_out.store32(m_out.constInt32(vectorLength), butterfly, m_heaps.Butterfly_vectorLength);
+        
+        if (hasDouble(structure-&gt;indexingType())) {
+            for (unsigned i = numElements; i &lt; vectorLength; ++i) {
+                m_out.store64(
+                    m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)),
+                    butterfly, m_heaps.indexedDoubleProperties[i]);
+            }
+        }
+        
+        return ArrayValues(object, butterfly);
+    }
+    
+    ArrayValues allocateJSArray(Structure* structure, unsigned numElements)
+    {
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;JSArray allocation slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;JSArray allocation continuation&quot;));
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+        
+        ArrayValues fastValues = allocateJSArray(structure, numElements, slowPath);
+        ValueFromBlock fastArray = m_out.anchor(fastValues.array);
+        ValueFromBlock fastButterfly = m_out.anchor(fastValues.butterfly);
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+
+        LValue slowArrayValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNewArrayWithSize, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), CCallHelpers::TrustedImm32(numElements));
+            });
+        ValueFromBlock slowArray = m_out.anchor(slowArrayValue);
+        ValueFromBlock slowButterfly = m_out.anchor(
+            m_out.loadPtr(slowArrayValue, m_heaps.JSObject_butterfly));
+
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        
+        return ArrayValues(
+            m_out.phi(m_out.intPtr, fastArray, slowArray),
+            m_out.phi(m_out.intPtr, fastButterfly, slowButterfly));
+    }
+    
+    LValue boolify(Edge edge)
+    {
+        switch (edge.useKind()) {
+        case BooleanUse:
+        case KnownBooleanUse:
+            return lowBoolean(edge);
+        case Int32Use:
+            return m_out.notZero32(lowInt32(edge));
+        case DoubleRepUse:
+            return m_out.doubleNotEqualAndOrdered(lowDouble(edge), m_out.doubleZero);
+        case ObjectOrOtherUse:
+            return m_out.logicalNot(
+                equalNullOrUndefined(
+                    edge, CellCaseSpeculatesObject, SpeculateNullOrUndefined,
+                    ManualOperandSpeculation));
+        case StringUse: {
+            LValue stringValue = lowString(edge);
+            LValue length = m_out.load32NonNegative(stringValue, m_heaps.JSString_length);
+            return m_out.notEqual(length, m_out.int32Zero);
+        }
+        case UntypedUse: {
+            LValue value = lowJSValue(edge);
+            
+            // Implements the following control flow structure:
+            // if (value is cell) {
+            //     if (value is string)
+            //         result = !!value-&gt;length
+            //     else {
+            //         do evil things for masquerades-as-undefined
+            //         result = true
+            //     }
+            // } else if (value is int32) {
+            //     result = !!unboxInt32(value)
+            // } else if (value is number) {
+            //     result = !!unboxDouble(value)
+            // } else {
+            //     result = value == jsTrue
+            // }
+            
+            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped cell case&quot;));
+            LBasicBlock stringCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped string case&quot;));
+            LBasicBlock notStringCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not string case&quot;));
+            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not cell case&quot;));
+            LBasicBlock int32Case = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped int32 case&quot;));
+            LBasicBlock notInt32Case = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not int32 case&quot;));
+            LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped double case&quot;));
+            LBasicBlock notDoubleCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not double case&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped continuation&quot;));
+            
+            Vector&lt;ValueFromBlock&gt; results;
+            
+            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
+            
+            LBasicBlock lastNext = m_out.appendTo(cellCase, stringCase);
+            m_out.branch(
+                isString(value, provenType(edge) &amp; SpecCell),
+                unsure(stringCase), unsure(notStringCase));
+            
+            m_out.appendTo(stringCase, notStringCase);
+            LValue nonEmptyString = m_out.notZero32(
+                m_out.load32NonNegative(value, m_heaps.JSString_length));
+            results.append(m_out.anchor(nonEmptyString));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notStringCase, notCellCase);
+            LValue isTruthyObject;
+            if (masqueradesAsUndefinedWatchpointIsStillValid())
+                isTruthyObject = m_out.booleanTrue;
+            else {
+                LBasicBlock masqueradesCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped masquerades case&quot;));
+                
+                results.append(m_out.anchor(m_out.booleanTrue));
+                
+                m_out.branch(
+                    m_out.testIsZero32(
+                        m_out.load8ZeroExt32(value, m_heaps.JSCell_typeInfoFlags),
+                        m_out.constInt32(MasqueradesAsUndefined)),
+                    usually(continuation), rarely(masqueradesCase));
+                
+                m_out.appendTo(masqueradesCase);
+                
+                isTruthyObject = m_out.notEqual(
+                    m_out.constIntPtr(m_graph.globalObjectFor(m_node-&gt;origin.semantic)),
+                    m_out.loadPtr(loadStructure(value), m_heaps.Structure_globalObject));
+            }
+            results.append(m_out.anchor(isTruthyObject));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notCellCase, int32Case);
+            m_out.branch(
+                isInt32(value, provenType(edge) &amp; ~SpecCell),
+                unsure(int32Case), unsure(notInt32Case));
+            
+            m_out.appendTo(int32Case, notInt32Case);
+            results.append(m_out.anchor(m_out.notZero32(unboxInt32(value))));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notInt32Case, doubleCase);
+            m_out.branch(
+                isNumber(value, provenType(edge) &amp; ~SpecCell),
+                unsure(doubleCase), unsure(notDoubleCase));
+            
+            m_out.appendTo(doubleCase, notDoubleCase);
+            LValue doubleIsTruthy = m_out.doubleNotEqualAndOrdered(
+                unboxDouble(value), m_out.constDouble(0));
+            results.append(m_out.anchor(doubleIsTruthy));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(notDoubleCase, continuation);
+            LValue miscIsTruthy = m_out.equal(
+                value, m_out.constInt64(JSValue::encode(jsBoolean(true))));
+            results.append(m_out.anchor(miscIsTruthy));
+            m_out.jump(continuation);
+            
+            m_out.appendTo(continuation, lastNext);
+            return m_out.phi(m_out.boolean, results);
+        }
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+            return 0;
+        }
+    }
+    
+    enum StringOrObjectMode {
+        AllCellsAreFalse,
+        CellCaseSpeculatesObject
+    };
+    enum EqualNullOrUndefinedMode {
+        EqualNull,
+        EqualUndefined,
+        EqualNullOrUndefined,
+        SpeculateNullOrUndefined
+    };
+    LValue equalNullOrUndefined(
+        Edge edge, StringOrObjectMode cellMode, EqualNullOrUndefinedMode primitiveMode,
+        OperandSpeculationMode operandMode = AutomaticOperandSpeculation)
+    {
+        bool validWatchpoint = masqueradesAsUndefinedWatchpointIsStillValid();
+        
+        LValue value = lowJSValue(edge, operandMode);
+        
+        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined cell case&quot;));
+        LBasicBlock primitiveCase = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined primitive case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined continuation&quot;));
+        
+        m_out.branch(isNotCell(value, provenType(edge)), unsure(primitiveCase), unsure(cellCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(cellCase, primitiveCase);
+        
+        Vector&lt;ValueFromBlock, 3&gt; results;
+        
+        switch (cellMode) {
+        case AllCellsAreFalse:
+            break;
+        case CellCaseSpeculatesObject:
+            FTL_TYPE_CHECK(
+                jsValueValue(value), edge, (~SpecCell) | SpecObject, isNotObject(value));
+            break;
+        }
+        
+        if (validWatchpoint) {
+            results.append(m_out.anchor(m_out.booleanFalse));
+            m_out.jump(continuation);
+        } else {
+            LBasicBlock masqueradesCase =
+                FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined masquerades case&quot;));
+                
+            results.append(m_out.anchor(m_out.booleanFalse));
+            
+            m_out.branch(
+                m_out.testNonZero32(
+                    m_out.load8ZeroExt32(value, m_heaps.JSCell_typeInfoFlags),
+                    m_out.constInt32(MasqueradesAsUndefined)),
+                rarely(masqueradesCase), usually(continuation));
+            
+            m_out.appendTo(masqueradesCase, primitiveCase);
+            
+            LValue structure = loadStructure(value);
+            
+            results.append(m_out.anchor(
+                m_out.equal(
+                    m_out.constIntPtr(m_graph.globalObjectFor(m_node-&gt;origin.semantic)),
+                    m_out.loadPtr(structure, m_heaps.Structure_globalObject))));
+            m_out.jump(continuation);
+        }
+        
+        m_out.appendTo(primitiveCase, continuation);
+        
+        LValue primitiveResult;
+        switch (primitiveMode) {
+        case EqualNull:
+            primitiveResult = m_out.equal(value, m_out.constInt64(ValueNull));
+            break;
+        case EqualUndefined:
+            primitiveResult = m_out.equal(value, m_out.constInt64(ValueUndefined));
+            break;
+        case EqualNullOrUndefined:
+            primitiveResult = isOther(value, provenType(edge));
+            break;
+        case SpeculateNullOrUndefined:
+            FTL_TYPE_CHECK(
+                jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
+            primitiveResult = m_out.booleanTrue;
+            break;
+        }
+        results.append(m_out.anchor(primitiveResult));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        
+        return m_out.phi(m_out.boolean, results);
+    }
+    
+    template&lt;typename FunctionType&gt;
+    void contiguousPutByValOutOfBounds(
+        FunctionType slowPathFunction, LValue base, LValue storage, LValue index, LValue value,
+        LBasicBlock continuation)
+    {
+        LValue isNotInBounds = m_out.aboveOrEqual(
+            index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength));
+        if (!m_node-&gt;arrayMode().isInBounds()) {
+            LBasicBlock notInBoundsCase =
+                FTL_NEW_BLOCK(m_out, (&quot;PutByVal not in bounds&quot;));
+            LBasicBlock performStore =
+                FTL_NEW_BLOCK(m_out, (&quot;PutByVal perform store&quot;));
+                
+            m_out.branch(isNotInBounds, unsure(notInBoundsCase), unsure(performStore));
+                
+            LBasicBlock lastNext = m_out.appendTo(notInBoundsCase, performStore);
+                
+            LValue isOutOfBounds = m_out.aboveOrEqual(
+                index, m_out.load32NonNegative(storage, m_heaps.Butterfly_vectorLength));
+                
+            if (!m_node-&gt;arrayMode().isOutOfBounds())
+                speculate(OutOfBounds, noValue(), 0, isOutOfBounds);
+            else {
+                LBasicBlock outOfBoundsCase =
+                    FTL_NEW_BLOCK(m_out, (&quot;PutByVal out of bounds&quot;));
+                LBasicBlock holeCase =
+                    FTL_NEW_BLOCK(m_out, (&quot;PutByVal hole case&quot;));
+                    
+                m_out.branch(isOutOfBounds, rarely(outOfBoundsCase), usually(holeCase));
+                    
+                LBasicBlock innerLastNext = m_out.appendTo(outOfBoundsCase, holeCase);
+                    
+                vmCall(
+                    m_out.voidType, m_out.operation(slowPathFunction),
+                    m_callFrame, base, index, value);
+                    
+                m_out.jump(continuation);
+                    
+                m_out.appendTo(holeCase, innerLastNext);
+            }
+            
+            m_out.store32(
+                m_out.add(index, m_out.int32One),
+                storage, m_heaps.Butterfly_publicLength);
+                
+            m_out.jump(performStore);
+            m_out.appendTo(performStore, lastNext);
+        }
+    }
+    
+    void buildSwitch(SwitchData* data, LType type, LValue switchValue)
+    {
+        ASSERT(type == m_out.intPtr || type == m_out.int32);
+
+        Vector&lt;SwitchCase&gt; cases;
+        for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
+            SwitchCase newCase;
+
+            if (type == m_out.intPtr) {
+                newCase = SwitchCase(m_out.constIntPtr(data-&gt;cases[i].value.switchLookupValue(data-&gt;kind)),
+                    lowBlock(data-&gt;cases[i].target.block), Weight(data-&gt;cases[i].target.count));
+            } else if (type == m_out.int32) {
+                newCase = SwitchCase(m_out.constInt32(data-&gt;cases[i].value.switchLookupValue(data-&gt;kind)),
+                    lowBlock(data-&gt;cases[i].target.block), Weight(data-&gt;cases[i].target.count));
+            } else
+                CRASH();
+
+            cases.append(newCase);
+        }
+        
+        m_out.switchInstruction(
+            switchValue, cases,
+            lowBlock(data-&gt;fallThrough.block), Weight(data-&gt;fallThrough.count));
+    }
+    
+    void switchString(SwitchData* data, LValue string)
+    {
+        bool canDoBinarySwitch = true;
+        unsigned totalLength = 0;
+        
+        for (DFG::SwitchCase myCase : data-&gt;cases) {
+            StringImpl* string = myCase.value.stringImpl();
+            if (!string-&gt;is8Bit()) {
+                canDoBinarySwitch = false;
+                break;
+            }
+            if (string-&gt;length() &gt; Options::maximumBinaryStringSwitchCaseLength()) {
+                canDoBinarySwitch = false;
+                break;
+            }
+            totalLength += string-&gt;length();
+        }
+        
+        if (!canDoBinarySwitch || totalLength &gt; Options::maximumBinaryStringSwitchTotalLength()) {
+            switchStringSlow(data, string);
+            return;
+        }
+        
+        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
+        LValue length = m_out.load32(string, m_heaps.JSString_length);
+        
+        LBasicBlock hasImplBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString has impl case&quot;));
+        LBasicBlock is8BitBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString is 8 bit case&quot;));
+        LBasicBlock slowBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString slow case&quot;));
+        
+        m_out.branch(m_out.isNull(stringImpl), unsure(slowBlock), unsure(hasImplBlock));
+        
+        LBasicBlock lastNext = m_out.appendTo(hasImplBlock, is8BitBlock);
+        
+        m_out.branch(
+            m_out.testIsZero32(
+                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIs8Bit())),
+            unsure(slowBlock), unsure(is8BitBlock));
+        
+        m_out.appendTo(is8BitBlock, slowBlock);
+        
+        LValue buffer = m_out.loadPtr(stringImpl, m_heaps.StringImpl_data);
+        
+        // FIXME: We should propagate branch weight data to the cases of this switch.
+        // https://bugs.webkit.org/show_bug.cgi?id=144368
+        
+        Vector&lt;StringSwitchCase&gt; cases;
+        for (DFG::SwitchCase myCase : data-&gt;cases)
+            cases.append(StringSwitchCase(myCase.value.stringImpl(), lowBlock(myCase.target.block)));
+        std::sort(cases.begin(), cases.end());
+        switchStringRecurse(data, buffer, length, cases, 0, 0, cases.size(), 0, false);
+
+        m_out.appendTo(slowBlock, lastNext);
+        switchStringSlow(data, string);
+    }
+    
+    // The code for string switching is based closely on the same code in the DFG backend. While it
+    // would be nice to reduce the amount of similar-looking code, it seems like this is one of
+    // those algorithms where factoring out the common bits would result in more code than just
+    // duplicating.
+    
+    struct StringSwitchCase {
+        StringSwitchCase() { }
+        
+        StringSwitchCase(StringImpl* string, LBasicBlock target)
+            : string(string)
+            , target(target)
+        {
+        }
+
+        bool operator&lt;(const StringSwitchCase&amp; other) const
+        {
+            return stringLessThan(*string, *other.string);
+        }
+        
+        StringImpl* string;
+        LBasicBlock target;
+    };
+    
+    struct CharacterCase {
+        CharacterCase()
+            : character(0)
+            , begin(0)
+            , end(0)
+        {
+        }
+        
+        CharacterCase(LChar character, unsigned begin, unsigned end)
+            : character(character)
+            , begin(begin)
+            , end(end)
+        {
+        }
+        
+        bool operator&lt;(const CharacterCase&amp; other) const
+        {
+            return character &lt; other.character;
+        }
+        
+        LChar character;
+        unsigned begin;
+        unsigned end;
+    };
+    
+    void switchStringRecurse(
+        SwitchData* data, LValue buffer, LValue length, const Vector&lt;StringSwitchCase&gt;&amp; cases,
+        unsigned numChecked, unsigned begin, unsigned end, unsigned alreadyCheckedLength,
+        unsigned checkedExactLength)
+    {
+        LBasicBlock fallThrough = lowBlock(data-&gt;fallThrough.block);
+        
+        if (begin == end) {
+            m_out.jump(fallThrough);
+            return;
+        }
+        
+        unsigned minLength = cases[begin].string-&gt;length();
+        unsigned commonChars = minLength;
+        bool allLengthsEqual = true;
+        for (unsigned i = begin + 1; i &lt; end; ++i) {
+            unsigned myCommonChars = numChecked;
+            unsigned limit = std::min(cases[begin].string-&gt;length(), cases[i].string-&gt;length());
+            for (unsigned j = numChecked; j &lt; limit; ++j) {
+                if (cases[begin].string-&gt;at(j) != cases[i].string-&gt;at(j))
+                    break;
+                myCommonChars++;
+            }
+            commonChars = std::min(commonChars, myCommonChars);
+            if (minLength != cases[i].string-&gt;length())
+                allLengthsEqual = false;
+            minLength = std::min(minLength, cases[i].string-&gt;length());
+        }
+        
+        if (checkedExactLength) {
+            DFG_ASSERT(m_graph, m_node, alreadyCheckedLength == minLength);
+            DFG_ASSERT(m_graph, m_node, allLengthsEqual);
+        }
+        
+        DFG_ASSERT(m_graph, m_node, minLength &gt;= commonChars);
+        
+        if (!allLengthsEqual &amp;&amp; alreadyCheckedLength &lt; minLength)
+            m_out.check(m_out.below(length, m_out.constInt32(minLength)), unsure(fallThrough));
+        if (allLengthsEqual &amp;&amp; (alreadyCheckedLength &lt; minLength || !checkedExactLength))
+            m_out.check(m_out.notEqual(length, m_out.constInt32(minLength)), unsure(fallThrough));
+        
+        for (unsigned i = numChecked; i &lt; commonChars; ++i) {
+            m_out.check(
+                m_out.notEqual(
+                    m_out.load8ZeroExt32(buffer, m_heaps.characters8[i]),
+                    m_out.constInt32(static_cast&lt;uint16_t&gt;(cases[begin].string-&gt;at(i)))),
+                unsure(fallThrough));
+        }
+        
+        if (minLength == commonChars) {
+            // This is the case where one of the cases is a prefix of all of the other cases.
+            // We've already checked that the input string is a prefix of all of the cases,
+            // so we just check length to jump to that case.
+            
+            DFG_ASSERT(m_graph, m_node, cases[begin].string-&gt;length() == commonChars);
+            for (unsigned i = begin + 1; i &lt; end; ++i)
+                DFG_ASSERT(m_graph, m_node, cases[i].string-&gt;length() &gt; commonChars);
+            
+            if (allLengthsEqual) {
+                DFG_ASSERT(m_graph, m_node, end == begin + 1);
+                m_out.jump(cases[begin].target);
+                return;
+            }
+            
+            m_out.check(
+                m_out.equal(length, m_out.constInt32(commonChars)),
+                unsure(cases[begin].target));
+            
+            // We've checked if the length is &gt;= minLength, and then we checked if the length is
+            // == commonChars. We get to this point if it is &gt;= minLength but not == commonChars.
+            // Hence we know that it now must be &gt; minLength, i.e. that it's &gt;= minLength + 1.
+            switchStringRecurse(
+                data, buffer, length, cases, commonChars, begin + 1, end, minLength + 1, false);
+            return;
+        }
+        
+        // At this point we know that the string is longer than commonChars, and we've only verified
+        // commonChars. Use a binary switch on the next unchecked character, i.e.
+        // string[commonChars].
+        
+        DFG_ASSERT(m_graph, m_node, end &gt;= begin + 2);
+        
+        LValue uncheckedChar = m_out.load8ZeroExt32(buffer, m_heaps.characters8[commonChars]);
+        
+        Vector&lt;CharacterCase&gt; characterCases;
+        CharacterCase currentCase(cases[begin].string-&gt;at(commonChars), begin, begin + 1);
+        for (unsigned i = begin + 1; i &lt; end; ++i) {
+            LChar currentChar = cases[i].string-&gt;at(commonChars);
+            if (currentChar != currentCase.character) {
+                currentCase.end = i;
+                characterCases.append(currentCase);
+                currentCase = CharacterCase(currentChar, i, i + 1);
+            } else
+                currentCase.end = i + 1;
+        }
+        characterCases.append(currentCase);
+        
+        Vector&lt;LBasicBlock&gt; characterBlocks;
+        for (CharacterCase&amp; myCase : characterCases)
+            characterBlocks.append(FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString case for &quot;, myCase.character, &quot; at index &quot;, commonChars)));
+        
+        Vector&lt;SwitchCase&gt; switchCases;
+        for (unsigned i = 0; i &lt; characterCases.size(); ++i) {
+            if (i)
+                DFG_ASSERT(m_graph, m_node, characterCases[i - 1].character &lt; characterCases[i].character);
+            switchCases.append(SwitchCase(
+                m_out.constInt32(characterCases[i].character), characterBlocks[i], Weight()));
+        }
+        m_out.switchInstruction(uncheckedChar, switchCases, fallThrough, Weight());
+        
+        LBasicBlock lastNext = m_out.m_nextBlock;
+        characterBlocks.append(lastNext); // Makes it convenient to set nextBlock.
+        for (unsigned i = 0; i &lt; characterCases.size(); ++i) {
+            m_out.appendTo(characterBlocks[i], characterBlocks[i + 1]);
+            switchStringRecurse(
+                data, buffer, length, cases, commonChars + 1,
+                characterCases[i].begin, characterCases[i].end, minLength, allLengthsEqual);
+        }
+        
+        DFG_ASSERT(m_graph, m_node, m_out.m_nextBlock == lastNext);
+    }
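The recursion above hinges on one scalar computed per slice of sorted cases: how many leading characters, beyond those already checked, are shared by every case. Those characters are verified with straight-line compares before the code binary-switches on the first position where the cases diverge. A sketch of just that common-prefix step, using `std::string` in place of `StringImpl` (an assumption for illustration):

```cpp
#include <cassert>
#include <algorithm>
#include <string>
#include <vector>

// Given a non-empty slice of switch cases and the number of characters
// already verified, return how many leading characters all cases share.
size_t commonPrefixLength(const std::vector<std::string>& cases,
                          size_t numChecked)
{
    size_t common = cases.front().size();
    for (size_t i = 1; i < cases.size(); ++i) {
        size_t myCommon = numChecked;
        size_t limit = std::min(cases.front().size(), cases[i].size());
        for (size_t j = numChecked; j < limit; ++j) {
            if (cases.front()[j] != cases[i][j])
                break;
            ++myCommon;
        }
        common = std::min(common, myCommon);
    }
    return common;
}
```

When the result equals the minimum case length, one case is a prefix of the others and a length check suffices; otherwise the recursion partitions the slice by the character at that index, exactly as `switchStringRecurse` does.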
+    
+    void switchStringSlow(SwitchData* data, LValue string)
+    {
+        // FIXME: We ought to be able to use computed gotos here. We would save the labels of the
+        // blocks we want to jump to, and then request their addresses after compilation completes.
+        // https://bugs.webkit.org/show_bug.cgi?id=144369
+        
+        LValue branchOffset = vmCall(
+            m_out.int32, m_out.operation(operationSwitchStringAndGetBranchOffset),
+            m_callFrame, m_out.constIntPtr(data-&gt;switchTableIndex), string);
+        
+        StringJumpTable&amp; table = codeBlock()-&gt;stringSwitchJumpTable(data-&gt;switchTableIndex);
+        
+        Vector&lt;SwitchCase&gt; cases;
+        std::unordered_set&lt;int32_t&gt; alreadyHandled; // These may be negative, or zero, or probably other stuff, too. We don't want to mess with HashSet's corner cases and we don't really care about throughput here.
+        for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
+            // FIXME: The fact that we're using the bytecode's switch table means that the
+            // following DFG IR transformation would be invalid.
+            //
+            // Original code:
+            //     switch (v) {
+            //     case &quot;foo&quot;:
+            //     case &quot;bar&quot;:
+            //         things();
+            //         break;
+            //     default:
+            //         break;
+            //     }
+            //
+            // New code:
+            //     switch (v) {
+            //     case &quot;foo&quot;:
+            //         instrumentFoo();
+            //         goto _things;
+            //     case &quot;bar&quot;:
+            //         instrumentBar();
+            //     _things:
+            //         things();
+            //         break;
+            //     default:
+            //         break;
+            //     }
+            //
+            // Luckily, we don't currently do any such transformation. But it's kind of silly that
+            // this is an issue.
+            // https://bugs.webkit.org/show_bug.cgi?id=144635
+            
+            DFG::SwitchCase myCase = data-&gt;cases[i];
+            StringJumpTable::StringOffsetTable::iterator iter =
+                table.offsetTable.find(myCase.value.stringImpl());
+            DFG_ASSERT(m_graph, m_node, iter != table.offsetTable.end());
+            
+            if (!alreadyHandled.insert(iter-&gt;value.branchOffset).second)
+                continue;
+
+            cases.append(SwitchCase(
+                m_out.constInt32(iter-&gt;value.branchOffset),
+                lowBlock(myCase.target.block), Weight(myCase.target.count)));
+        }
+        
+        m_out.switchInstruction(
+            branchOffset, cases, lowBlock(data-&gt;fallThrough.block),
+            Weight(data-&gt;fallThrough.count));
+    }
+    
+    // Calls the functor at the point of code generation where we know what the result type is.
+    // You can emit whatever code you like at that point. Expects you to terminate the basic block.
+    // When buildTypeOf() returns, it will have terminated all basic blocks that it created. So, if
+    // you aren't using this as the terminator of a high-level block, you should create your own
+    // continuation and set it as the nextBlock (m_out.insertNewBlocksBefore(continuation)) before
+    // calling this. For example:
+    //
+    // LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;My continuation&quot;));
+    // LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+    // buildTypeOf(
+    //     child, value,
+    //     [&amp;] (TypeofType type) {
+    //          do things;
+    //          m_out.jump(continuation);
+    //     });
+    // m_out.appendTo(continuation, lastNext);
+    template&lt;typename Functor&gt;
+    void buildTypeOf(Edge child, LValue value, const Functor&amp; functor)
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
+        
+        // Implements the following branching structure:
+        //
+        // if (is cell) {
+        //     if (is object) {
+        //         if (is function) {
+        //             return function;
+        //         } else if (doesn't have a call trap and doesn't masquerade as undefined) {
+        //             return object;
+        //         } else {
+        //             return slowPath();
+        //         }
+        //     } else if (is string) {
+        //         return string;
+        //     } else {
+        //         return symbol;
+        //     }
+        // } else if (is number) {
+        //     return number;
+        // } else if (is null) {
+        //     return object;
+        // } else if (is boolean) {
+        //     return boolean;
+        // } else {
+        //     return undefined;
+        // }
+        
+        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf cell case&quot;));
+        LBasicBlock objectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf object case&quot;));
+        LBasicBlock functionCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf function case&quot;));
+        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not function case&quot;));
+        LBasicBlock reallyObjectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf really object case&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf slow path&quot;));
+        LBasicBlock unreachable = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf unreachable&quot;));
+        LBasicBlock notObjectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not object case&quot;));
+        LBasicBlock stringCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf string case&quot;));
+        LBasicBlock symbolCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf symbol case&quot;));
+        LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not cell case&quot;));
+        LBasicBlock numberCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf number case&quot;));
+        LBasicBlock notNumberCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not number case&quot;));
+        LBasicBlock notNullCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not null case&quot;));
+        LBasicBlock booleanCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf boolean case&quot;));
+        LBasicBlock undefinedCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf undefined case&quot;));
+        
+        m_out.branch(isCell(value, provenType(child)), unsure(cellCase), unsure(notCellCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(cellCase, objectCase);
+        m_out.branch(isObject(value, provenType(child)), unsure(objectCase), unsure(notObjectCase));
+        
+        m_out.appendTo(objectCase, functionCase);
+        m_out.branch(
+            isFunction(value, provenType(child) &amp; SpecObject),
+            unsure(functionCase), unsure(notFunctionCase));
+        
+        m_out.appendTo(functionCase, notFunctionCase);
+        functor(TypeofType::Function);
+        
+        m_out.appendTo(notFunctionCase, reallyObjectCase);
+        m_out.branch(
+            isExoticForTypeof(value, provenType(child) &amp; (SpecObject - SpecFunction)),
+            rarely(slowPath), usually(reallyObjectCase));
+        
+        m_out.appendTo(reallyObjectCase, slowPath);
+        functor(TypeofType::Object);
+        
+        m_out.appendTo(slowPath, unreachable);
+        LValue result = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationTypeOfObjectAsTypeofType, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
+        Vector&lt;SwitchCase, 3&gt; cases;
+        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Undefined)), undefinedCase));
+        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Object)), reallyObjectCase));
+        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Function)), functionCase));
+        m_out.switchInstruction(m_out.castToInt32(result), cases, unreachable, Weight());
+        
+        m_out.appendTo(unreachable, notObjectCase);
+        m_out.unreachable();
+        
+        m_out.appendTo(notObjectCase, stringCase);
+        m_out.branch(
+            isString(value, provenType(child) &amp; (SpecCell - SpecObject)),
+            unsure(stringCase), unsure(symbolCase));
+        
+        m_out.appendTo(stringCase, symbolCase);
+        functor(TypeofType::String);
+        
+        m_out.appendTo(symbolCase, notCellCase);
+        functor(TypeofType::Symbol);
+        
+        m_out.appendTo(notCellCase, numberCase);
+        m_out.branch(
+            isNumber(value, provenType(child) &amp; ~SpecCell),
+            unsure(numberCase), unsure(notNumberCase));
+        
+        m_out.appendTo(numberCase, notNumberCase);
+        functor(TypeofType::Number);
+        
+        m_out.appendTo(notNumberCase, notNullCase);
+        LValue isNull;
+        if (provenType(child) &amp; SpecOther)
+            isNull = m_out.equal(value, m_out.constInt64(ValueNull));
+        else
+            isNull = m_out.booleanFalse;
+        m_out.branch(isNull, unsure(reallyObjectCase), unsure(notNullCase));
+        
+        m_out.appendTo(notNullCase, booleanCase);
+        m_out.branch(
+            isBoolean(value, provenType(child) &amp; ~(SpecCell | SpecFullNumber)),
+            unsure(booleanCase), unsure(undefinedCase));
+        
+        m_out.appendTo(booleanCase, undefinedCase);
+        functor(TypeofType::Boolean);
+        
+        m_out.appendTo(undefinedCase, lastNext);
+        functor(TypeofType::Undefined);
+    }
+    
+    LValue doubleToInt32(LValue doubleValue, double low, double high, bool isSigned = true)
+    {
+        LBasicBlock greatEnough = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 greatEnough&quot;));
+        LBasicBlock withinRange = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 withinRange&quot;));
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 slowPath&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 continuation&quot;));
+        
+        Vector&lt;ValueFromBlock, 2&gt; results;
+        
+        m_out.branch(
+            m_out.doubleGreaterThanOrEqual(doubleValue, m_out.constDouble(low)),
+            unsure(greatEnough), unsure(slowPath));
+        
+        LBasicBlock lastNext = m_out.appendTo(greatEnough, withinRange);
+        m_out.branch(
+            m_out.doubleLessThanOrEqual(doubleValue, m_out.constDouble(high)),
+            unsure(withinRange), unsure(slowPath));
+        
+        m_out.appendTo(withinRange, slowPath);
+        LValue fastResult;
+        if (isSigned)
+            fastResult = m_out.doubleToInt(doubleValue);
+        else
+            fastResult = m_out.doubleToUInt(doubleValue);
+        results.append(m_out.anchor(fastResult));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(slowPath, continuation);
+        results.append(m_out.anchor(m_out.call(m_out.int32, m_out.operation(toInt32), doubleValue)));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.int32, results);
+    }
+    
+    LValue doubleToInt32(LValue doubleValue)
+    {
+        if (Output::hasSensibleDoubleToInt())
+            return sensibleDoubleToInt32(doubleValue);
+        
+        double limit = pow(2, 31) - 1;
+        return doubleToInt32(doubleValue, -limit, limit);
+    }
+    
+    LValue sensibleDoubleToInt32(LValue doubleValue)
+    {
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;sensible doubleToInt32 slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;sensible doubleToInt32 continuation&quot;));
+
+        LValue fastResultValue = m_out.doubleToInt(doubleValue);
+        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
+        m_out.branch(
+            m_out.equal(fastResultValue, m_out.constInt32(0x80000000)),
+            rarely(slowPath), usually(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
+        ValueFromBlock slowResult = m_out.anchor(
+            m_out.call(m_out.int32, m_out.operation(toInt32), doubleValue));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.int32, fastResult, slowResult);
+    }
+
+    // This is a mechanism for creating a code generator that fills in a gap in the code using our
+    // own MacroAssembler. This is useful for slow paths that involve a lot of code, where we don't
+    // want to pay the price of B3 optimizing it. A lazy slow path is only generated if it actually
+    // executes. On the other hand, a lazy slow path always incurs the cost of two additional jumps.
+    // Also, the lazy slow path's register allocation state is slaved to whatever B3 did, so you
+    // have to use a ScratchRegisterAllocator to try to use some unused registers and you may have
+    // to spill to the top of the stack if there aren't enough registers available.
+    //
+    // Lazy slow paths involve three different stages of execution. Each stage has unique
+    // capabilities and knowledge. The stages are:
+    //
+    // 1) DFG-&gt;B3 lowering, i.e. code that runs in this phase. Lowering is the last time you will
+    //    have access to LValues. If there is an LValue that needs to be fed as input to a lazy slow
+    //    path, then you must pass it as an argument here (as one of the varargs arguments after the
+    //    functor). But, lowering doesn't know which registers will be used for those LValues. Hence
+    //    you pass a lambda to lazySlowPath() and that lambda will run during stage (2):
+    //
+    // 2) FTLCompile.cpp's fixFunctionBasedOnStackMaps. This code is the only stage at which we know
+    //    the mapping from arguments passed to this method in (1) and the registers that B3
+    //    selected for those arguments. You don't actually want to generate any code here, since then
+    //    the slow path wouldn't actually be lazily generated. Instead, you want to save the
+    //    registers being used for the arguments and defer code generation to stage (3) by creating
+    //    and returning a LazySlowPath::Generator:
+    //
+    // 3) LazySlowPath's generate() method. This code runs in response to the lazy slow path
+    //    executing for the first time. It will call the generator you created in stage (2).
+    //
+    // Note that each time you invoke stage (1), stage (2) may be invoked zero, one, or many times.
+    // Stage (2) will usually be invoked once per stage (1). But B3 may kill the code, in which
+    // case stage (2) won't run at all. B3 may also duplicate the code (for example via tail
+    // duplication), leading to many calls to your stage (2) lambda. Stage (3) runs zero times or
+    // once for each stage (2). It runs zero times if the slow path never executes, which is what
+    // you hope for whenever you use the lazySlowPath() mechanism.
+    //
+    // A typical use of lazySlowPath() will look like the example below, which just creates a slow
+    // path that adds some value to the input and returns it.
+    //
+    // // Stage (1) is here. This is your last chance to figure out which LValues to use as inputs.
+    // // Notice how we pass &quot;input&quot; as an argument to lazySlowPath().
+    // LValue input = ...;
+    // int addend = ...;
+    // LValue output = lazySlowPath(
+    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+    //         // Stage (2) is here. This is your last chance to figure out which registers are used
+    //         // for which values. Location zero is always the return value. You can ignore it if
+    //         // you don't want to return anything. Location 1 is the register for the first
+    //         // argument to the lazySlowPath(), i.e. &quot;input&quot;. Note that the Location object could
+    //         // also hold an FPR, if you are passing a double.
+    //         GPRReg outputGPR = locations[0].directGPR();
+    //         GPRReg inputGPR = locations[1].directGPR();
+    //         return LazySlowPath::createGenerator(
+    //             [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
+    //                 // Stage (3) is here. This is when you generate code. You have access to the
+    //                 // registers you collected in stage (2) because this lambda closes over those
+    //                 // variables (outputGPR and inputGPR). You also have access to whatever extra
+    //                 // data you collected in stage (1), such as the addend in this case.
+    //                 jit.add32(TrustedImm32(addend), inputGPR, outputGPR);
+    //                 // You have to end by jumping to done. There is nothing to fall through to.
+    //                 // You can also jump to the exception handler (see LazySlowPath.h for more
+    //                 // info). Note that currently you cannot OSR exit.
+    //                 params.doneJumps.append(jit.jump());
+    //             });
+    //     },
+    //     input);
+    //
+    // You can basically pass as many inputs as you like, either using this varargs form, or by
+    // passing a Vector of LValues.
+    //
+    // Note that if your slow path is only doing a call, you can use the createLazyCallGenerator()
+    // helper. For example:
+    //
+    // LValue input = ...;
+    // LValue output = lazySlowPath(
+    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+    //         return createLazyCallGenerator(
+    //             operationDoThings, locations[0].directGPR(), locations[1].directGPR());
+    //     }, input);
+    //
+    // Finally, note that all of the lambdas - both the stage (2) lambda and the stage (3) lambda -
+    // run after the function that created them returns. Hence, you should not use by-reference
+    // capture (i.e. [&amp;]) in any of these lambdas.
+    template&lt;typename Functor, typename... ArgumentTypes&gt;
+    LValue lazySlowPath(const Functor&amp; functor, ArgumentTypes... arguments)
+    {
+        return lazySlowPath(functor, Vector&lt;LValue&gt;{ arguments... });
+    }
+
+    template&lt;typename Functor&gt;
+    LValue lazySlowPath(const Functor&amp; functor, const Vector&lt;LValue&gt;&amp; userArguments)
+    {
+        CodeOrigin origin = m_node-&gt;origin.semantic;
+        
+        PatchpointValue* result = m_out.patchpoint(B3::Int64);
+        for (LValue arg : userArguments)
+            result-&gt;append(ConstrainedValue(arg, B3::ValueRep::SomeRegister));
+
+        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
+            preparePatchpointForExceptions(result);
+        
+        result-&gt;clobber(RegisterSet::macroScratchRegisters());
+        State* state = &amp;m_ftlState;
+
+        result-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
+                Vector&lt;Location&gt; locations;
+                for (const B3::ValueRep&amp; rep : params)
+                    locations.append(Location::forValueRep(rep));
+
+                RefPtr&lt;LazySlowPath::Generator&gt; generator = functor(locations);
+                
+                CCallHelpers::PatchableJump patchableJump = jit.patchableJump();
+                CCallHelpers::Label done = jit.label();
+
+                RegisterSet usedRegisters = params.unavailableRegisters();
+
+                RefPtr&lt;ExceptionTarget&gt; exceptionTarget =
+                    exceptionHandle-&gt;scheduleExitCreation(params);
+
+                // FIXME: As part of handling exceptions, we need to create a concrete OSRExit here.
+                // Doing so should automagically register late paths that emit exit thunks.
+
+                params.addLatePath(
+                    [=] (CCallHelpers&amp; jit) {
+                        AllowMacroScratchRegisterUsage allowScratch(jit);
+                        patchableJump.m_jump.link(&amp;jit);
+                        unsigned index = state-&gt;jitCode-&gt;lazySlowPaths.size();
+                        state-&gt;jitCode-&gt;lazySlowPaths.append(nullptr);
+                        jit.pushToSaveImmediateWithoutTouchingRegisters(
+                            CCallHelpers::TrustedImm32(index));
+                        CCallHelpers::Jump generatorJump = jit.jump();
+
+                        // Note that so long as we're here, we don't really know if our late path
+                        // runs before or after any other late paths that we might depend on, like
+                        // the exception thunk.
+
+                        RefPtr&lt;JITCode&gt; jitCode = state-&gt;jitCode;
+                        VM* vm = &amp;state-&gt;graph.m_vm;
+
+                        jit.addLinkTask(
+                            [=] (LinkBuffer&amp; linkBuffer) {
+                                linkBuffer.link(
+                                    generatorJump, CodeLocationLabel(
+                                        vm-&gt;getCTIStub(
+                                            lazySlowPathGenerationThunkGenerator).code()));
+                                
+                                CodeLocationJump linkedPatchableJump = CodeLocationJump(
+                                    linkBuffer.locationOf(patchableJump));
+                                CodeLocationLabel linkedDone = linkBuffer.locationOf(done);
+
+                                CallSiteIndex callSiteIndex =
+                                    jitCode-&gt;common.addUniqueCallSiteIndex(origin);
+                                    
+                                std::unique_ptr&lt;LazySlowPath&gt; lazySlowPath =
+                                    std::make_unique&lt;LazySlowPath&gt;(
+                                        linkedPatchableJump, linkedDone,
+                                        exceptionTarget-&gt;label(linkBuffer), usedRegisters,
+                                        callSiteIndex, generator);
+                                    
+                                jitCode-&gt;lazySlowPaths[index] = WTFMove(lazySlowPath);
+                            });
+                    });
+            });
+        return result;
+    }
+    
+    void speculate(
+        ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition)
+    {
+        appendOSRExit(kind, lowValue, highValue, failCondition, m_origin);
+    }
+    
+    void terminate(ExitKind kind)
+    {
+        speculate(kind, noValue(), nullptr, m_out.booleanTrue);
+        didAlreadyTerminate();
+    }
+    
+    void didAlreadyTerminate()
+    {
+        m_state.setIsValid(false);
+    }
+    
+    void typeCheck(
+        FormattedValue lowValue, Edge highValue, SpeculatedType typesPassedThrough,
+        LValue failCondition, ExitKind exitKind = BadType)
+    {
+        appendTypeCheck(lowValue, highValue, typesPassedThrough, failCondition, exitKind);
+    }
+    
+    void appendTypeCheck(
+        FormattedValue lowValue, Edge highValue, SpeculatedType typesPassedThrough,
+        LValue failCondition, ExitKind exitKind)
+    {
+        if (!m_interpreter.needsTypeCheck(highValue, typesPassedThrough))
+            return;
+        ASSERT(mayHaveTypeCheck(highValue.useKind()));
+        appendOSRExit(exitKind, lowValue, highValue.node(), failCondition, m_origin);
+        m_interpreter.filter(highValue, typesPassedThrough);
+    }
+    
+    LValue lowInt32(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || (edge.useKind() == Int32Use || edge.useKind() == KnownInt32Use));
+        
+        if (edge-&gt;hasConstant()) {
+            JSValue value = edge-&gt;asJSValue();
+            if (!value.isInt32()) {
+                terminate(Uncountable);
+                return m_out.int32Zero;
+            }
+            return m_out.constInt32(value.asInt32());
+        }
+        
+        LoweredNodeValue value = m_int32Values.get(edge.node());
+        if (isValid(value))
+            return value.value();
+        
+        value = m_strictInt52Values.get(edge.node());
+        if (isValid(value))
+            return strictInt52ToInt32(edge, value.value());
+        
+        value = m_int52Values.get(edge.node());
+        if (isValid(value))
+            return strictInt52ToInt32(edge, int52ToStrictInt52(value.value()));
+        
+        value = m_jsValueValues.get(edge.node());
+        if (isValid(value)) {
+            LValue boxedResult = value.value();
+            FTL_TYPE_CHECK(
+                jsValueValue(boxedResult), edge, SpecInt32, isNotInt32(boxedResult));
+            LValue result = unboxInt32(boxedResult);
+            setInt32(edge.node(), result);
+            return result;
+        }
+
+        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecInt32));
+        terminate(Uncountable);
+        return m_out.int32Zero;
+    }
+    
+    enum Int52Kind { StrictInt52, Int52 };
+    LValue lowInt52(Edge edge, Int52Kind kind)
+    {
+        DFG_ASSERT(m_graph, m_node, edge.useKind() == Int52RepUse);
+        
+        LoweredNodeValue value;
+        
+        switch (kind) {
+        case Int52:
+            value = m_int52Values.get(edge.node());
+            if (isValid(value))
+                return value.value();
+            
+            value = m_strictInt52Values.get(edge.node());
+            if (isValid(value))
+                return strictInt52ToInt52(value.value());
+            break;
+            
+        case StrictInt52:
+            value = m_strictInt52Values.get(edge.node());
+            if (isValid(value))
+                return value.value();
+            
+            value = m_int52Values.get(edge.node());
+            if (isValid(value))
+                return int52ToStrictInt52(value.value());
+            break;
+        }
+
+        DFG_ASSERT(m_graph, m_node, !provenType(edge));
+        terminate(Uncountable);
+        return m_out.int64Zero;
+    }
+    
+    LValue lowInt52(Edge edge)
+    {
+        return lowInt52(edge, Int52);
+    }
+    
+    LValue lowStrictInt52(Edge edge)
+    {
+        return lowInt52(edge, StrictInt52);
+    }
+    
+    bool betterUseStrictInt52(Node* node)
+    {
+        return !isValid(m_int52Values.get(node));
+    }
+    bool betterUseStrictInt52(Edge edge)
+    {
+        return betterUseStrictInt52(edge.node());
+    }
+    template&lt;typename T&gt;
+    Int52Kind bestInt52Kind(T node)
+    {
+        return betterUseStrictInt52(node) ? StrictInt52 : Int52;
+    }
+    Int52Kind opposite(Int52Kind kind)
+    {
+        switch (kind) {
+        case Int52:
+            return StrictInt52;
+        case StrictInt52:
+            return Int52;
+        }
+        DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
+        return Int52;
+    }
+    
+    LValue lowWhicheverInt52(Edge edge, Int52Kind&amp; kind)
+    {
+        kind = bestInt52Kind(edge);
+        return lowInt52(edge, kind);
+    }
+    
+    LValue lowCell(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        DFG_ASSERT(m_graph, m_node, mode == ManualOperandSpeculation || DFG::isCell(edge.useKind()));
+        
+        if (edge-&gt;op() == JSConstant) {
+            JSValue value = edge-&gt;asJSValue();
+            if (!value.isCell()) {
+                terminate(Uncountable);
+                return m_out.intPtrZero;
+            }
+            return m_out.constIntPtr(value.asCell());
+        }
+        
+        LoweredNodeValue value = m_jsValueValues.get(edge.node());
+        if (isValid(value)) {
+            LValue uncheckedValue = value.value();
+            FTL_TYPE_CHECK(
+                jsValueValue(uncheckedValue), edge, SpecCell, isNotCell(uncheckedValue));
+            return uncheckedValue;
+        }
+        
+        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecCell));
+        terminate(Uncountable);
+        return m_out.intPtrZero;
+    }
+    
+    LValue lowObject(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == ObjectUse);
+        
+        LValue result = lowCell(edge, mode);
+        speculateObject(edge, result);
+        return result;
+    }
+    
+    LValue lowString(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == StringUse || edge.useKind() == KnownStringUse || edge.useKind() == StringIdentUse);
+        
+        LValue result = lowCell(edge, mode);
+        speculateString(edge, result);
+        return result;
+    }
+    
+    LValue lowStringIdent(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == StringIdentUse);
+        
+        LValue string = lowString(edge, mode);
+        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
+        speculateStringIdent(edge, string, stringImpl);
+        return stringImpl;
+    }
+
+    LValue lowSymbol(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == SymbolUse);
+
+        LValue result = lowCell(edge, mode);
+        speculateSymbol(edge, result);
+        return result;
+    }
+
+    LValue lowNonNullObject(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == ObjectUse);
+        
+        LValue result = lowCell(edge, mode);
+        speculateNonNullObject(edge, result);
+        return result;
+    }
+    
+    LValue lowBoolean(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == BooleanUse || edge.useKind() == KnownBooleanUse);
+        
+        if (edge-&gt;hasConstant()) {
+            JSValue value = edge-&gt;asJSValue();
+            if (!value.isBoolean()) {
+                terminate(Uncountable);
+                return m_out.booleanFalse;
+            }
+            return m_out.constBool(value.asBoolean());
+        }
+        
+        LoweredNodeValue value = m_booleanValues.get(edge.node());
+        if (isValid(value))
+            return value.value();
+        
+        value = m_jsValueValues.get(edge.node());
+        if (isValid(value)) {
+            LValue unboxedResult = value.value();
+            FTL_TYPE_CHECK(
+                jsValueValue(unboxedResult), edge, SpecBoolean, isNotBoolean(unboxedResult));
+            LValue result = unboxBoolean(unboxedResult);
+            setBoolean(edge.node(), result);
+            return result;
+        }
+        
+        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecBoolean));
+        terminate(Uncountable);
+        return m_out.booleanFalse;
+    }
+    
+    LValue lowDouble(Edge edge)
+    {
+        DFG_ASSERT(m_graph, m_node, isDouble(edge.useKind()));
+        
+        LoweredNodeValue value = m_doubleValues.get(edge.node());
+        if (isValid(value))
+            return value.value();
+        DFG_ASSERT(m_graph, m_node, !provenType(edge));
+        terminate(Uncountable);
+        return m_out.doubleZero;
+    }
+    
+    LValue lowJSValue(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
+    {
+        DFG_ASSERT(m_graph, m_node, mode == ManualOperandSpeculation || edge.useKind() == UntypedUse);
+        DFG_ASSERT(m_graph, m_node, !isDouble(edge.useKind()));
+        DFG_ASSERT(m_graph, m_node, edge.useKind() != Int52RepUse);
+        
+        if (edge-&gt;hasConstant())
+            return m_out.constInt64(JSValue::encode(edge-&gt;asJSValue()));
+
+        LoweredNodeValue value = m_jsValueValues.get(edge.node());
+        if (isValid(value))
+            return value.value();
+        
+        value = m_int32Values.get(edge.node());
+        if (isValid(value)) {
+            LValue result = boxInt32(value.value());
+            setJSValue(edge.node(), result);
+            return result;
+        }
+        
+        value = m_booleanValues.get(edge.node());
+        if (isValid(value)) {
+            LValue result = boxBoolean(value.value());
+            setJSValue(edge.node(), result);
+            return result;
+        }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Value not defined&quot;);
+        return 0;
+    }
+    
+    LValue lowStorage(Edge edge)
+    {
+        LoweredNodeValue value = m_storageValues.get(edge.node());
+        if (isValid(value))
+            return value.value();
+        
+        LValue result = lowCell(edge);
+        setStorage(edge.node(), result);
+        return result;
+    }
+    
+    LValue strictInt52ToInt32(Edge edge, LValue value)
+    {
+        LValue result = m_out.castToInt32(value);
+        FTL_TYPE_CHECK(
+            noValue(), edge, SpecInt32,
+            m_out.notEqual(m_out.signExt32To64(result), value));
+        setInt32(edge.node(), result);
+        return result;
+    }
+    
+    LValue strictInt52ToDouble(LValue value)
+    {
+        return m_out.intToDouble(value);
+    }
+    
+    LValue strictInt52ToJSValue(LValue value)
+    {
+        LBasicBlock isInt32 = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue isInt32 case&quot;));
+        LBasicBlock isDouble = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue isDouble case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue continuation&quot;));
+        
+        Vector&lt;ValueFromBlock, 2&gt; results;
+            
+        LValue int32Value = m_out.castToInt32(value);
+        m_out.branch(
+            m_out.equal(m_out.signExt32To64(int32Value), value),
+            unsure(isInt32), unsure(isDouble));
+        
+        LBasicBlock lastNext = m_out.appendTo(isInt32, isDouble);
+        
+        results.append(m_out.anchor(boxInt32(int32Value)));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(isDouble, continuation);
+        
+        results.append(m_out.anchor(boxDouble(m_out.intToDouble(value))));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(m_out.int64, results);
+    }
+    
+    LValue strictInt52ToInt52(LValue value)
+    {
+        return m_out.shl(value, m_out.constInt64(JSValue::int52ShiftAmount));
+    }
+    
+    LValue int52ToStrictInt52(LValue value)
+    {
+        return m_out.aShr(value, m_out.constInt64(JSValue::int52ShiftAmount));
+    }
+    
+    LValue isInt32(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecInt32))
+            return proven;
+        return m_out.aboveOrEqual(jsValue, m_tagTypeNumber);
+    }
+    LValue isNotInt32(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecInt32))
+            return proven;
+        return m_out.below(jsValue, m_tagTypeNumber);
+    }
+    LValue unboxInt32(LValue jsValue)
+    {
+        return m_out.castToInt32(jsValue);
+    }
+    LValue boxInt32(LValue value)
+    {
+        return m_out.add(m_out.zeroExt(value, m_out.int64), m_tagTypeNumber);
+    }
+    
+    LValue isCellOrMisc(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecCell | SpecMisc))
+            return proven;
+        return m_out.testIsZero64(jsValue, m_tagTypeNumber);
+    }
+    LValue isNotCellOrMisc(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~(SpecCell | SpecMisc)))
+            return proven;
+        return m_out.testNonZero64(jsValue, m_tagTypeNumber);
+    }
+    
+    LValue unboxDouble(LValue jsValue)
+    {
+        return m_out.bitCast(m_out.add(jsValue, m_tagTypeNumber), m_out.doubleType);
+    }
+    LValue boxDouble(LValue doubleValue)
+    {
+        return m_out.sub(m_out.bitCast(doubleValue, m_out.int64), m_tagTypeNumber);
+    }
+    
+    LValue jsValueToStrictInt52(Edge edge, LValue boxedValue)
+    {
+        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing int case&quot;));
+        LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing double case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing continuation&quot;));
+            
+        LValue isNotInt32;
+        if (!m_interpreter.needsTypeCheck(edge, SpecInt32))
+            isNotInt32 = m_out.booleanFalse;
+        else if (!m_interpreter.needsTypeCheck(edge, ~SpecInt32))
+            isNotInt32 = m_out.booleanTrue;
+        else
+            isNotInt32 = this-&gt;isNotInt32(boxedValue);
+        m_out.branch(isNotInt32, unsure(doubleCase), unsure(intCase));
+            
+        LBasicBlock lastNext = m_out.appendTo(intCase, doubleCase);
+            
+        ValueFromBlock intToInt52 = m_out.anchor(
+            m_out.signExt32To64(unboxInt32(boxedValue)));
+        m_out.jump(continuation);
+            
+        m_out.appendTo(doubleCase, continuation);
+        
+        LValue possibleResult = m_out.call(
+            m_out.int64, m_out.operation(operationConvertBoxedDoubleToInt52), boxedValue);
+        FTL_TYPE_CHECK(
+            jsValueValue(boxedValue), edge, SpecInt32 | SpecInt52AsDouble,
+            m_out.equal(possibleResult, m_out.constInt64(JSValue::notInt52)));
+            
+        ValueFromBlock doubleToInt52 = m_out.anchor(possibleResult);
+        m_out.jump(continuation);
+            
+        m_out.appendTo(continuation, lastNext);
+            
+        return m_out.phi(m_out.int64, intToInt52, doubleToInt52);
+    }
+    
+    LValue doubleToStrictInt52(Edge edge, LValue value)
+    {
+        LValue possibleResult = m_out.call(
+            m_out.int64, m_out.operation(operationConvertDoubleToInt52), value);
+        FTL_TYPE_CHECK_WITH_EXIT_KIND(Int52Overflow,
+            doubleValue(value), edge, SpecInt52AsDouble,
+            m_out.equal(possibleResult, m_out.constInt64(JSValue::notInt52)));
+        
+        return possibleResult;
+    }
+
+    LValue convertDoubleToInt32(LValue value, bool shouldCheckNegativeZero)
+    {
+        LValue integerValue = m_out.doubleToInt(value);
+        LValue integerValueConvertedToDouble = m_out.intToDouble(integerValue);
+        LValue valueNotConvertibleToInteger = m_out.doubleNotEqualOrUnordered(value, integerValueConvertedToDouble);
+        speculate(Overflow, FormattedValue(DataFormatDouble, value), m_node, valueNotConvertibleToInteger);
+
+        if (shouldCheckNegativeZero) {
+            LBasicBlock valueIsZero = FTL_NEW_BLOCK(m_out, (&quot;ConvertDoubleToInt32 on zero&quot;));
+            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ConvertDoubleToInt32 continuation&quot;));
+            m_out.branch(m_out.isZero32(integerValue), unsure(valueIsZero), unsure(continuation));
+
+            LBasicBlock lastNext = m_out.appendTo(valueIsZero, continuation);
+
+            LValue doubleBitcastToInt64 = m_out.bitCast(value, m_out.int64);
+            LValue signBitSet = m_out.lessThan(doubleBitcastToInt64, m_out.constInt64(0));
+
+            speculate(NegativeZero, FormattedValue(DataFormatDouble, value), m_node, signBitSet);
+            m_out.jump(continuation);
+            m_out.appendTo(continuation, lastNext);
+        }
+        return integerValue;
+    }
+
+    LValue isNumber(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecFullNumber))
+            return proven;
+        return isNotCellOrMisc(jsValue);
+    }
+    LValue isNotNumber(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecFullNumber))
+            return proven;
+        return isCellOrMisc(jsValue);
+    }
+    
+    LValue isNotCell(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecCell))
+            return proven;
+        return m_out.testNonZero64(jsValue, m_tagMask);
+    }
+    
+    LValue isCell(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecCell))
+            return proven;
+        return m_out.testIsZero64(jsValue, m_tagMask);
+    }
+    
+    LValue isNotMisc(LValue value, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecMisc))
+            return proven;
+        return m_out.above(value, m_out.constInt64(TagBitTypeOther | TagBitBool | TagBitUndefined));
+    }
+    
+    LValue isMisc(LValue value, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecMisc))
+            return proven;
+        return m_out.logicalNot(isNotMisc(value));
+    }
+    
+    LValue isNotBoolean(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecBoolean))
+            return proven;
+        return m_out.testNonZero64(
+            m_out.bitXor(jsValue, m_out.constInt64(ValueFalse)),
+            m_out.constInt64(~1));
+    }
+    LValue isBoolean(LValue jsValue, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecBoolean))
+            return proven;
+        return m_out.logicalNot(isNotBoolean(jsValue));
+    }
+    LValue unboxBoolean(LValue jsValue)
+    {
+        // We want to use a cast that guarantees that B3 knows that even the integer
+        // value is just 0 or 1. But for now we do it the dumb way.
+        return m_out.notZero64(m_out.bitAnd(jsValue, m_out.constInt64(1)));
+    }
+    LValue boxBoolean(LValue value)
+    {
+        return m_out.select(
+            value, m_out.constInt64(ValueTrue), m_out.constInt64(ValueFalse));
+    }
+    
+    LValue isNotOther(LValue value, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, ~SpecOther))
+            return proven;
+        return m_out.notEqual(
+            m_out.bitAnd(value, m_out.constInt64(~TagBitUndefined)),
+            m_out.constInt64(ValueNull));
+    }
+    LValue isOther(LValue value, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type, SpecOther))
+            return proven;
+        return m_out.equal(
+            m_out.bitAnd(value, m_out.constInt64(~TagBitUndefined)),
+            m_out.constInt64(ValueNull));
+    }
+    
+    LValue isProvenValue(SpeculatedType provenType, SpeculatedType wantedType)
+    {
+        if (!(provenType &amp; ~wantedType))
+            return m_out.booleanTrue;
+        if (!(provenType &amp; wantedType))
+            return m_out.booleanFalse;
+        return nullptr;
+    }
+
+    void speculate(Edge edge)
+    {
+        switch (edge.useKind()) {
+        case UntypedUse:
+            break;
+        case KnownInt32Use:
+        case KnownStringUse:
+        case KnownPrimitiveUse:
+        case DoubleRepUse:
+        case Int52RepUse:
+            ASSERT(!m_interpreter.needsTypeCheck(edge));
+            break;
+        case Int32Use:
+            speculateInt32(edge);
+            break;
+        case CellUse:
+            speculateCell(edge);
+            break;
+        case CellOrOtherUse:
+            speculateCellOrOther(edge);
+            break;
+        case KnownCellUse:
+            ASSERT(!m_interpreter.needsTypeCheck(edge));
+            break;
+        case MachineIntUse:
+            speculateMachineInt(edge);
+            break;
+        case ObjectUse:
+            speculateObject(edge);
+            break;
+        case FunctionUse:
+            speculateFunction(edge);
+            break;
+        case ObjectOrOtherUse:
+            speculateObjectOrOther(edge);
+            break;
+        case FinalObjectUse:
+            speculateFinalObject(edge);
+            break;
+        case StringUse:
+            speculateString(edge);
+            break;
+        case StringIdentUse:
+            speculateStringIdent(edge);
+            break;
+        case SymbolUse:
+            speculateSymbol(edge);
+            break;
+        case StringObjectUse:
+            speculateStringObject(edge);
+            break;
+        case StringOrStringObjectUse:
+            speculateStringOrStringObject(edge);
+            break;
+        case NumberUse:
+            speculateNumber(edge);
+            break;
+        case RealNumberUse:
+            speculateRealNumber(edge);
+            break;
+        case DoubleRepRealUse:
+            speculateDoubleRepReal(edge);
+            break;
+        case DoubleRepMachineIntUse:
+            speculateDoubleRepMachineInt(edge);
+            break;
+        case BooleanUse:
+            speculateBoolean(edge);
+            break;
+        case NotStringVarUse:
+            speculateNotStringVar(edge);
+            break;
+        case NotCellUse:
+            speculateNotCell(edge);
+            break;
+        case OtherUse:
+            speculateOther(edge);
+            break;
+        case MiscUse:
+            speculateMisc(edge);
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, &quot;Unsupported speculation use kind&quot;);
+        }
+    }
+    
+    void speculate(Node*, Edge edge)
+    {
+        speculate(edge);
+    }
+    
+    void speculateInt32(Edge edge)
+    {
+        lowInt32(edge);
+    }
+    
+    void speculateCell(Edge edge)
+    {
+        lowCell(edge);
+    }
+    
+    void speculateCellOrOther(Edge edge)
+    {
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+
+        LBasicBlock isNotCell = FTL_NEW_BLOCK(m_out, (&quot;Speculate CellOrOther not cell&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate CellOrOther continuation&quot;));
+
+        m_out.branch(isCell(value, provenType(edge)), unsure(continuation), unsure(isNotCell));
+
+        LBasicBlock lastNext = m_out.appendTo(isNotCell, continuation);
+        FTL_TYPE_CHECK(jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void speculateMachineInt(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        jsValueToStrictInt52(edge, lowJSValue(edge, ManualOperandSpeculation));
+    }
+    
+    LValue isObject(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecObject))
+            return proven;
+        return m_out.aboveOrEqual(
+            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+            m_out.constInt32(ObjectType));
+    }
+
+    LValue isNotObject(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecObject))
+            return proven;
+        return m_out.below(
+            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+            m_out.constInt32(ObjectType));
+    }
+
+    LValue isNotString(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecString))
+            return proven;
+        return m_out.notEqual(
+            m_out.load32(cell, m_heaps.JSCell_structureID),
+            m_out.constInt32(vm().stringStructure-&gt;id()));
+    }
+    
+    LValue isString(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecString))
+            return proven;
+        return m_out.equal(
+            m_out.load32(cell, m_heaps.JSCell_structureID),
+            m_out.constInt32(vm().stringStructure-&gt;id()));
+    }
+
+    LValue isNotSymbol(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecSymbol))
+            return proven;
+        return m_out.notEqual(
+            m_out.load32(cell, m_heaps.JSCell_structureID),
+            m_out.constInt32(vm().symbolStructure-&gt;id()));
+    }
+
+    LValue isArrayType(LValue cell, ArrayMode arrayMode)
+    {
+        switch (arrayMode.type()) {
+        case Array::Int32:
+        case Array::Double:
+        case Array::Contiguous: {
+            LValue indexingType = m_out.load8ZeroExt32(cell, m_heaps.JSCell_indexingType);
+            
+            switch (arrayMode.arrayClass()) {
+            case Array::OriginalArray:
+                DFG_CRASH(m_graph, m_node, &quot;Unexpected original array&quot;);
+                return 0;
+                
+            case Array::Array:
+                return m_out.equal(
+                    m_out.bitAnd(indexingType, m_out.constInt32(IsArray | IndexingShapeMask)),
+                    m_out.constInt32(IsArray | arrayMode.shapeMask()));
+                
+            case Array::NonArray:
+            case Array::OriginalNonArray:
+                return m_out.equal(
+                    m_out.bitAnd(indexingType, m_out.constInt32(IsArray | IndexingShapeMask)),
+                    m_out.constInt32(arrayMode.shapeMask()));
+                
+            case Array::PossiblyArray:
+                return m_out.equal(
+                    m_out.bitAnd(indexingType, m_out.constInt32(IndexingShapeMask)),
+                    m_out.constInt32(arrayMode.shapeMask()));
+            }
+            
+            DFG_CRASH(m_graph, m_node, &quot;Corrupt array class&quot;);
+        }
+            
+        case Array::DirectArguments:
+            return m_out.equal(
+                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+                m_out.constInt32(DirectArgumentsType));
+            
+        case Array::ScopedArguments:
+            return m_out.equal(
+                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+                m_out.constInt32(ScopedArgumentsType));
+            
+        default:
+            return m_out.equal(
+                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+                m_out.constInt32(typeForTypedArrayType(arrayMode.typedArrayType())));
+        }
+    }
+    
+    LValue isFunction(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecFunction))
+            return proven;
+        return isType(cell, JSFunctionType);
+    }
+    LValue isNotFunction(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecFunction))
+            return proven;
+        return isNotType(cell, JSFunctionType);
+    }
+            
+    LValue isExoticForTypeof(LValue cell, SpeculatedType type = SpecFullTop)
+    {
+        if (!(type &amp; SpecObjectOther))
+            return m_out.booleanFalse;
+        return m_out.testNonZero32(
+            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
+            m_out.constInt32(MasqueradesAsUndefined | TypeOfShouldCallGetCallData));
+    }
+    
+    LValue isType(LValue cell, JSType type)
+    {
+        return m_out.equal(
+            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
+            m_out.constInt32(type));
+    }
+    
+    LValue isNotType(LValue cell, JSType type)
+    {
+        return m_out.logicalNot(isType(cell, type));
+    }
+    
+    void speculateObject(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecObject, isNotObject(cell));
+    }
+    
+    void speculateObject(Edge edge)
+    {
+        speculateObject(edge, lowCell(edge));
+    }
+    
+    void speculateFunction(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecFunction, isNotFunction(cell));
+    }
+    
+    void speculateFunction(Edge edge)
+    {
+        speculateFunction(edge, lowCell(edge));
+    }
+    
+    void speculateObjectOrOther(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        
+        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther cell case&quot;));
+        LBasicBlock primitiveCase = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther primitive case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther continuation&quot;));
+        
+        m_out.branch(isNotCell(value, provenType(edge)), unsure(primitiveCase), unsure(cellCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(cellCase, primitiveCase);
+        
+        FTL_TYPE_CHECK(
+            jsValueValue(value), edge, (~SpecCell) | SpecObject, isNotObject(value));
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(primitiveCase, continuation);
+        
+        FTL_TYPE_CHECK(
+            jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
+        
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void speculateFinalObject(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(
+            jsValueValue(cell), edge, SpecFinalObject, isNotType(cell, FinalObjectType));
+    }
+    
+    void speculateFinalObject(Edge edge)
+    {
+        speculateFinalObject(edge, lowCell(edge));
+    }
+    
+    void speculateString(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecString | ~SpecCell, isNotString(cell));
+    }
+    
+    void speculateString(Edge edge)
+    {
+        speculateString(edge, lowCell(edge));
+    }
+    
+    void speculateStringIdent(Edge edge, LValue string, LValue stringImpl)
+    {
+        if (!m_interpreter.needsTypeCheck(edge, SpecStringIdent | ~SpecString))
+            return;
+        
+        speculate(BadType, jsValueValue(string), edge.node(), m_out.isNull(stringImpl));
+        speculate(
+            BadType, jsValueValue(string), edge.node(),
+            m_out.testIsZero32(
+                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
+                m_out.constInt32(StringImpl::flagIsAtomic())));
+        m_interpreter.filter(edge, SpecStringIdent | ~SpecString);
+    }
+    
+    void speculateStringIdent(Edge edge)
+    {
+        lowStringIdent(edge);
+    }
+    
+    void speculateStringObject(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge, SpecStringObject))
+            return;
+        
+        speculateStringObjectForCell(edge, lowCell(edge));
+        m_interpreter.filter(edge, SpecStringObject);
+    }
+    
+    void speculateStringOrStringObject(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge, SpecString | SpecStringObject))
+            return;
+        
+        LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;Speculate StringOrStringObject not string case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate StringOrStringObject continuation&quot;));
+        
+        LValue structureID = m_out.load32(lowCell(edge), m_heaps.JSCell_structureID);
+        m_out.branch(
+            m_out.equal(structureID, m_out.constInt32(vm().stringStructure-&gt;id())),
+            unsure(continuation), unsure(notString));
+        
+        LBasicBlock lastNext = m_out.appendTo(notString, continuation);
+        speculateStringObjectForStructureID(edge, structureID);
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+        
+        m_interpreter.filter(edge, SpecString | SpecStringObject);
+    }
+    
+    void speculateStringObjectForCell(Edge edge, LValue cell)
+    {
+        speculateStringObjectForStructureID(edge, m_out.load32(cell, m_heaps.JSCell_structureID));
+    }
+    
+    void speculateStringObjectForStructureID(Edge edge, LValue structureID)
+    {
+        Structure* stringObjectStructure =
+            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;stringObjectStructure();
+        
+        if (abstractStructure(edge).isSubsetOf(StructureSet(stringObjectStructure)))
+            return;
+        
+        speculate(
+            NotStringObject, noValue(), 0,
+            m_out.notEqual(structureID, weakStructureID(stringObjectStructure)));
+    }
+
+    void speculateSymbol(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecSymbol | ~SpecCell, isNotSymbol(cell));
+    }
+
+    void speculateSymbol(Edge edge)
+    {
+        speculateSymbol(edge, lowCell(edge));
+    }
+
+    void speculateNonNullObject(Edge edge, LValue cell)
+    {
+        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecObject, isNotObject(cell));
+        if (masqueradesAsUndefinedWatchpointIsStillValid())
+            return;
+        
+        speculate(
+            BadType, jsValueValue(cell), edge.node(),
+            m_out.testNonZero32(
+                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
+                m_out.constInt32(MasqueradesAsUndefined)));
+    }
+    
+    void speculateNumber(Edge edge)
+    {
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        FTL_TYPE_CHECK(jsValueValue(value), edge, SpecBytecodeNumber, isNotNumber(value));
+    }
+    
+    void speculateRealNumber(Edge edge)
+    {
+        // Do an early return here because lowDouble() can create a lot of control flow.
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        LValue doubleValue = unboxDouble(value);
+        
+        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;speculateRealNumber int case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;speculateRealNumber continuation&quot;));
+        
+        m_out.branch(
+            m_out.doubleEqual(doubleValue, doubleValue),
+            usually(continuation), rarely(intCase));
+        
+        LBasicBlock lastNext = m_out.appendTo(intCase, continuation);
+        
+        typeCheck(
+            jsValueValue(value), m_node-&gt;child1(), SpecBytecodeRealNumber,
+            isNotInt32(value, provenType(m_node-&gt;child1()) &amp; ~SpecFullDouble));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void speculateDoubleRepReal(Edge edge)
+    {
+        // Do an early return here because lowDouble() can create a lot of control flow.
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowDouble(edge);
+        FTL_TYPE_CHECK(
+            doubleValue(value), edge, SpecDoubleReal,
+            m_out.doubleNotEqualOrUnordered(value, value));
+    }
+    
+    void speculateDoubleRepMachineInt(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        doubleToStrictInt52(edge, lowDouble(edge));
+    }
+    
+    void speculateBoolean(Edge edge)
+    {
+        lowBoolean(edge);
+    }
+    
+    void speculateNotStringVar(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge, ~SpecStringVar))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        
+        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar is cell case&quot;));
+        LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar is string case&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar continuation&quot;));
+        
+        m_out.branch(isCell(value, provenType(edge)), unsure(isCellCase), unsure(continuation));
+        
+        LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
+        m_out.branch(isString(value, provenType(edge)), unsure(isStringCase), unsure(continuation));
+        
+        m_out.appendTo(isStringCase, continuation);
+        speculateStringIdent(edge, value, m_out.loadPtr(value, m_heaps.JSString_value));
+        m_out.jump(continuation);
+        
+        m_out.appendTo(continuation, lastNext);
+    }
+    
+    void speculateNotCell(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        typeCheck(jsValueValue(value), edge, ~SpecCell, isCell(value));
+    }
+    
+    void speculateOther(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        typeCheck(jsValueValue(value), edge, SpecOther, isNotOther(value));
+    }
+    
+    void speculateMisc(Edge edge)
+    {
+        if (!m_interpreter.needsTypeCheck(edge))
+            return;
+        
+        LValue value = lowJSValue(edge, ManualOperandSpeculation);
+        typeCheck(jsValueValue(value), edge, SpecMisc, isNotMisc(value));
+    }
+    
+    bool masqueradesAsUndefinedWatchpointIsStillValid()
+    {
+        return m_graph.masqueradesAsUndefinedWatchpointIsStillValid(m_node-&gt;origin.semantic);
+    }
+    
+    LValue loadCellState(LValue base)
+    {
+        return m_out.load8ZeroExt32(base, m_heaps.JSCell_cellState);
+    }
+
+    void emitStoreBarrier(LValue base)
+    {
+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;Store barrier slow path&quot;));
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Store barrier continuation&quot;));
+
+        m_out.branch(
+            m_out.notZero32(loadCellState(base)), usually(continuation), rarely(slowPath));
+
+        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
+
+        // We emit the store barrier slow path lazily. In a lot of cases, this will never fire. And
+        // when it does fire, it makes sense for us to generate this code using our JIT rather than
+        // wasting B3's time optimizing it.
+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                GPRReg baseGPR = locations[1].directGPR();
+
+                return LazySlowPath::createGenerator(
+                    [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
+                        RegisterSet usedRegisters = params.lazySlowPath-&gt;usedRegisters();
+                        ScratchRegisterAllocator scratchRegisterAllocator(usedRegisters);
+                        scratchRegisterAllocator.lock(baseGPR);
+
+                        GPRReg scratch1 = scratchRegisterAllocator.allocateScratchGPR();
+                        GPRReg scratch2 = scratchRegisterAllocator.allocateScratchGPR();
+
+                        ScratchRegisterAllocator::PreservedState preservedState =
+                            scratchRegisterAllocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
+
+                        // We've already saved these, so when we make a slow path call, we don't have
+                        // to save them again.
+                        usedRegisters.exclude(RegisterSet(scratch1, scratch2));
+
+                        WriteBarrierBuffer&amp; writeBarrierBuffer = jit.vm()-&gt;heap.writeBarrierBuffer();
+                        jit.load32(writeBarrierBuffer.currentIndexAddress(), scratch2);
+                        CCallHelpers::Jump needToFlush = jit.branch32(
+                            CCallHelpers::AboveOrEqual, scratch2,
+                            CCallHelpers::TrustedImm32(writeBarrierBuffer.capacity()));
+
+                        jit.add32(CCallHelpers::TrustedImm32(1), scratch2);
+                        jit.store32(scratch2, writeBarrierBuffer.currentIndexAddress());
+
+                        jit.move(CCallHelpers::TrustedImmPtr(writeBarrierBuffer.buffer()), scratch1);
+                        jit.storePtr(
+                            baseGPR,
+                            CCallHelpers::BaseIndex(
+                                scratch1, scratch2, CCallHelpers::ScalePtr,
+                                static_cast&lt;int32_t&gt;(-sizeof(void*))));
+
+                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, preservedState);
+
+                        params.doneJumps.append(jit.jump());
+
+                        needToFlush.link(&amp;jit);
+                        callOperation(
+                            usedRegisters, jit, params.lazySlowPath-&gt;callSiteIndex(),
+                            params.exceptionJumps, operationFlushWriteBarrierBuffer, InvalidGPRReg,
+                            baseGPR);
+                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, preservedState);
+                        params.doneJumps.append(jit.jump());
+                    });
+            },
+            base);
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+    }
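The fast path emitted above can be sketched in plain C++: the buffer index is checked against capacity, incremented, and the base cell stored one slot back, which is why the BaseIndex address above carries a -sizeof(void*) displacement. This is an illustrative sketch under assumed names; WriteBarrierBufferSketch is a hypothetical stand-in for JSC's WriteBarrierBuffer, and the real slow path calls operationFlushWriteBarrierBuffer rather than flushing inline.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch only: models the write-barrier fast path emitted above.
// WriteBarrierBufferSketch is hypothetical, not JSC's WriteBarrierBuffer.
struct WriteBarrierBufferSketch {
    static const unsigned capacity = 4;
    unsigned currentIndex = 0;
    void* buffer[capacity] = {};
    bool tookSlowPath = false;

    void add(void* base)
    {
        if (currentIndex >= capacity) {
            // Slow path: flush the buffer, then record the cell.
            tookSlowPath = true;
            currentIndex = 0;
        }
        // Fast path mirrors the emitted code: the index is incremented first,
        // so the store lands at buffer[index - 1], the reason for the
        // -sizeof(void*) displacement in the BaseIndex address above.
        ++currentIndex;
        buffer[currentIndex - 1] = base;
    }
};
```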
+
+    template&lt;typename... Args&gt;
+    LValue vmCall(LType type, LValue function, Args... args)
+    {
+        callPreflight();
+        LValue result = m_out.call(type, function, args...);
+        callCheck();
+        return result;
+    }
+    
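The vmCall() wrapper above brackets every call into the VM with callPreflight(), which records the call site, and callCheck(), which tests for a pending exception. A minimal sketch of that discipline, with a hypothetical VmSketch standing in for the real VM state:

```cpp
#include <cassert>

// Hypothetical sketch of the vmCall() discipline above: record the call site
// before the call (callPreflight), run the operation, then check for a
// pending exception afterwards (callCheck). VmSketch is illustrative only.
struct VmSketch {
    unsigned lastCallSiteIndex = 0;
    bool checkedForException = false;
};

template<typename F>
auto vmCallSketch(VmSketch& vm, unsigned callSiteIndex, F operation) -> decltype(operation())
{
    vm.lastCallSiteIndex = callSiteIndex; // callPreflight(): store the call site index
    auto result = operation();            // the actual call into the VM
    vm.checkedForException = true;        // callCheck(): would branch to a handler here
    return result;
}
```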
+    void callPreflight(CodeOrigin codeOrigin)
+    {
+        CallSiteIndex callSiteIndex = m_ftlState.jitCode-&gt;common.addCodeOrigin(codeOrigin);
+        m_out.store32(
+            m_out.constInt32(callSiteIndex.bits()),
+            tagFor(JSStack::ArgumentCount));
+    }
+
+    void callPreflight()
+    {
+        callPreflight(codeOriginDescriptionOfCallSite());
+    }
+
+    CodeOrigin codeOriginDescriptionOfCallSite() const
+    {
+        CodeOrigin codeOrigin = m_node-&gt;origin.semantic;
+        if (m_node-&gt;op() == TailCallInlinedCaller
+            || m_node-&gt;op() == TailCallVarargsInlinedCaller
+            || m_node-&gt;op() == TailCallForwardVarargsInlinedCaller) {
+                            // This case arises in a situation like the following:
+                            // foo calls bar, and bar is inlined into foo. bar then calls baz,
+                            // and baz is inlined into bar. baz then makes a tail call to jaz,
+                            // and jaz is inlined into baz. We want the call frame for jaz to
+                            // appear to have bar as its caller.
+            codeOrigin = *codeOrigin.inlineCallFrame-&gt;getCallerSkippingTailCalls();
+        }
+
+        return codeOrigin;
+    }
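The caller-skipping walk described above can be sketched as a loop over inline call frames: starting from the frame's direct caller, keep walking up while the caller itself made a tail call. FrameSketch here is a hypothetical stand-in for JSC's InlineCallFrame.

```cpp
#include <cassert>
#include <cstring>

// Illustrative sketch of getCallerSkippingTailCalls() for the scenario in the
// comment above. FrameSketch is hypothetical, not JSC's InlineCallFrame.
struct FrameSketch {
    const char* name;
    FrameSketch* caller = nullptr;
    bool madeTailCall = false; // this frame invoked its callee via a tail call
};

const char* callerSkippingTailCalls(FrameSketch* frame)
{
    FrameSketch* caller = frame->caller;
    while (caller && caller->madeTailCall)
        caller = caller->caller;
    return caller ? caller->name : "<machine frame>";
}
```

In the comment's scenario (foo calls bar, bar calls baz, baz tail-calls jaz, all inlined), walking up from jaz skips baz and lands on bar.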
+    
+    void callCheck()
+    {
+        if (Options::useExceptionFuzz())
+            m_out.call(m_out.voidType, m_out.operation(operationExceptionFuzz), m_callFrame);
+        
+        LValue exception = m_out.load64(m_out.absolute(vm().addressOfException()));
+        LValue hadException = m_out.notZero64(exception);
+
+        CodeOrigin opCatchOrigin;
+        HandlerInfo* exceptionHandler;
+        if (m_graph.willCatchExceptionInMachineFrame(m_origin.forExit, opCatchOrigin, exceptionHandler)) {
+            bool exitOK = true;
+            bool isExceptionHandler = true;
+            appendOSRExit(
+                ExceptionCheck, noValue(), nullptr, hadException,
+                m_origin.withForExitAndExitOK(opCatchOrigin, exitOK), isExceptionHandler);
+            return;
+        }
+
+        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Exception check continuation&quot;));
+
+        m_out.branch(
+            hadException, rarely(m_handleExceptions), usually(continuation));
+
+        m_out.appendTo(continuation);
+    }
+
+    RefPtr&lt;PatchpointExceptionHandle&gt; preparePatchpointForExceptions(PatchpointValue* value)
+    {
+        CodeOrigin opCatchOrigin;
+        HandlerInfo* exceptionHandler;
+        bool willCatchException = m_graph.willCatchExceptionInMachineFrame(m_origin.forExit, opCatchOrigin, exceptionHandler);
+        if (!willCatchException)
+            return PatchpointExceptionHandle::defaultHandle(m_ftlState);
+
+        if (verboseCompilationEnabled()) {
+            dataLog(&quot;    Patchpoint exception OSR exit #&quot;, m_ftlState.jitCode-&gt;osrExitDescriptors.size(), &quot; with availability: &quot;, availabilityMap(), &quot;\n&quot;);
+            if (!m_availableRecoveries.isEmpty())
+                dataLog(&quot;        Available recoveries: &quot;, listDump(m_availableRecoveries), &quot;\n&quot;);
+        }
+
+        bool exitOK = true;
+        NodeOrigin origin = m_origin.withForExitAndExitOK(opCatchOrigin, exitOK);
+
+        OSRExitDescriptor* exitDescriptor = appendOSRExitDescriptor(noValue(), nullptr);
+
+        // Compute the offset into the StackmapGenerationParams where we will find the exit arguments
+        // we are about to append. We need to account for both the children we've already added, and
+        // for the possibility of a result value if the patchpoint is not void.
+        unsigned offset = value-&gt;numChildren();
+        if (value-&gt;type() != Void)
+            offset++;
+
+        // Use LateColdAny to ensure that the stackmap arguments interfere with the patchpoint's
+        // result and with any late-clobbered registers.
+        value-&gt;appendVectorWithRep(
+            buildExitArguments(exitDescriptor, opCatchOrigin, noValue()),
+            ValueRep::LateColdAny);
+
+        return PatchpointExceptionHandle::create(
+            m_ftlState, exitDescriptor, origin, offset, *exceptionHandler);
+    }
+
+    LBasicBlock lowBlock(DFG::BasicBlock* block)
+    {
+        return m_blocks.get(block);
+    }
+
+    OSRExitDescriptor* appendOSRExitDescriptor(FormattedValue lowValue, Node* highValue)
+    {
+        return &amp;m_ftlState.jitCode-&gt;osrExitDescriptors.alloc(
+            lowValue.format(), m_graph.methodOfGettingAValueProfileFor(highValue),
+            availabilityMap().m_locals.numberOfArguments(),
+            availabilityMap().m_locals.numberOfLocals());
+    }
+    
+    void appendOSRExit(
+        ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition, 
+        NodeOrigin origin, bool isExceptionHandler = false)
+    {
+        if (verboseCompilationEnabled()) {
+            dataLog(&quot;    OSR exit #&quot;, m_ftlState.jitCode-&gt;osrExitDescriptors.size(), &quot; with availability: &quot;, availabilityMap(), &quot;\n&quot;);
+            if (!m_availableRecoveries.isEmpty())
+                dataLog(&quot;        Available recoveries: &quot;, listDump(m_availableRecoveries), &quot;\n&quot;);
+        }
+
+        DFG_ASSERT(m_graph, m_node, origin.exitOK);
+        
+        if (doOSRExitFuzzing() &amp;&amp; !isExceptionHandler) {
+            LValue numberOfFuzzChecks = m_out.add(
+                m_out.load32(m_out.absolute(&amp;g_numberOfOSRExitFuzzChecks)),
+                m_out.int32One);
+            
+            m_out.store32(numberOfFuzzChecks, m_out.absolute(&amp;g_numberOfOSRExitFuzzChecks));
+            
+            if (unsigned atOrAfter = Options::fireOSRExitFuzzAtOrAfter()) {
+                failCondition = m_out.bitOr(
+                    failCondition,
+                    m_out.aboveOrEqual(numberOfFuzzChecks, m_out.constInt32(atOrAfter)));
+            }
+            if (unsigned at = Options::fireOSRExitFuzzAt()) {
+                failCondition = m_out.bitOr(
+                    failCondition,
+                    m_out.equal(numberOfFuzzChecks, m_out.constInt32(at)));
+            }
+        }
+
+        if (failCondition == m_out.booleanFalse)
+            return;
+
+        blessSpeculation(
+            m_out.speculate(failCondition), kind, lowValue, highValue, origin);
+    }
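The exit-fuzzing logic in appendOSRExit() above ORs the real fail condition with counter-based triggers. A sketch of that predicate, with hypothetical names mirroring fireOSRExitFuzzAt() and fireOSRExitFuzzAtOrAfter():

```cpp
#include <cassert>

// Sketch of the OSR-exit fuzz predicate built in appendOSRExit() above: each
// check bumps a global counter, and the fail condition is OR'ed with
// "counter == fireAt" and "counter >= fireAtOrAfter" when those options are
// nonzero. Names are illustrative.
static unsigned g_fuzzCheckCount = 0;

bool osrExitFails(bool realFailCondition, unsigned fireAt, unsigned fireAtOrAfter)
{
    unsigned count = ++g_fuzzCheckCount;
    bool fail = realFailCondition;
    if (fireAtOrAfter)
        fail = fail || count >= fireAtOrAfter;
    if (fireAt)
        fail = fail || count == fireAt;
    return fail;
}
```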
+
+    void blessSpeculation(CheckValue* value, ExitKind kind, FormattedValue lowValue, Node* highValue, NodeOrigin origin)
+    {
+        OSRExitDescriptor* exitDescriptor = appendOSRExitDescriptor(lowValue, highValue);
+        
+        value-&gt;appendColdAnys(buildExitArguments(exitDescriptor, origin.forExit, lowValue));
+
+        State* state = &amp;m_ftlState;
+        value-&gt;setGenerator(
+            [=] (CCallHelpers&amp; jit, const B3::StackmapGenerationParams&amp; params) {
+                exitDescriptor-&gt;emitOSRExit(
+                    *state, kind, origin, jit, params, 0);
+            });
+    }
+
+    StackmapArgumentList buildExitArguments(
+        OSRExitDescriptor* exitDescriptor, CodeOrigin exitOrigin, FormattedValue lowValue,
+        unsigned offsetOfExitArgumentsInStackmapLocations = 0)
+    {
+        StackmapArgumentList result;
+        buildExitArguments(
+            exitDescriptor, exitOrigin, result, lowValue, offsetOfExitArgumentsInStackmapLocations);
+        return result;
+    }
+    
+    void buildExitArguments(
+        OSRExitDescriptor* exitDescriptor, CodeOrigin exitOrigin, StackmapArgumentList&amp; arguments, FormattedValue lowValue,
+        unsigned offsetOfExitArgumentsInStackmapLocations = 0)
+    {
+        if (!!lowValue)
+            arguments.append(lowValue.value());
+        
+        AvailabilityMap availabilityMap = this-&gt;availabilityMap();
+        availabilityMap.pruneByLiveness(m_graph, exitOrigin);
+        
+        HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt; map;
+        availabilityMap.forEachAvailability(
+            [&amp;] (Availability availability) {
+                if (!availability.shouldUseNode())
+                    return;
+                
+                Node* node = availability.node();
+                if (!node-&gt;isPhantomAllocation())
+                    return;
+                
+                auto result = map.add(node, nullptr);
+                if (result.isNewEntry) {
+                    result.iterator-&gt;value =
+                        exitDescriptor-&gt;m_materializations.add(node-&gt;op(), node-&gt;origin.semantic);
+                }
+            });
+        
+        for (unsigned i = 0; i &lt; exitDescriptor-&gt;m_values.size(); ++i) {
+            int operand = exitDescriptor-&gt;m_values.operandForIndex(i);
+            
+            Availability availability = availabilityMap.m_locals[i];
+            
+            if (Options::validateFTLOSRExitLiveness()) {
+                DFG_ASSERT(
+                    m_graph, m_node,
+                    (!(availability.isDead() &amp;&amp; m_graph.isLiveInBytecode(VirtualRegister(operand), exitOrigin))) || m_graph.m_plan.mode == FTLForOSREntryMode);
+            }
+            ExitValue exitValue = exitValueForAvailability(arguments, map, availability);
+            if (exitValue.hasIndexInStackmapLocations())
+                exitValue.adjustStackmapLocationsIndexByOffset(offsetOfExitArgumentsInStackmapLocations);
+            exitDescriptor-&gt;m_values[i] = exitValue;
+        }
+        
+        for (auto heapPair : availabilityMap.m_heap) {
+            Node* node = heapPair.key.base();
+            ExitTimeObjectMaterialization* materialization = map.get(node);
+            ExitValue exitValue = exitValueForAvailability(arguments, map, heapPair.value);
+            if (exitValue.hasIndexInStackmapLocations())
+                exitValue.adjustStackmapLocationsIndexByOffset(offsetOfExitArgumentsInStackmapLocations);
+            materialization-&gt;add(
+                heapPair.key.descriptor(),
+                exitValue);
+        }
+        
+        if (verboseCompilationEnabled()) {
+            dataLog(&quot;        Exit values: &quot;, exitDescriptor-&gt;m_values, &quot;\n&quot;);
+            if (!exitDescriptor-&gt;m_materializations.isEmpty()) {
+                dataLog(&quot;        Materializations: \n&quot;);
+                for (ExitTimeObjectMaterialization* materialization : exitDescriptor-&gt;m_materializations)
+                    dataLog(&quot;            &quot;, pointerDump(materialization), &quot;\n&quot;);
+            }
+        }
+    }
+
+    ExitValue exitValueForAvailability(
+        StackmapArgumentList&amp; arguments, const HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt;&amp; map,
+        Availability availability)
+    {
+        FlushedAt flush = availability.flushedAt();
+        switch (flush.format()) {
+        case DeadFlush:
+        case ConflictingFlush:
+            if (availability.hasNode())
+                return exitValueForNode(arguments, map, availability.node());
+            
+            // This means that the value is dead. It could be dead in bytecode or it could have
+            // been killed by our DCE, which can sometimes kill things even if they were live in
+            // bytecode.
+            return ExitValue::dead();
+
+        case FlushedJSValue:
+        case FlushedCell:
+        case FlushedBoolean:
+            return ExitValue::inJSStack(flush.virtualRegister());
+                
+        case FlushedInt32:
+            return ExitValue::inJSStackAsInt32(flush.virtualRegister());
+                
+        case FlushedInt52:
+            return ExitValue::inJSStackAsInt52(flush.virtualRegister());
+                
+        case FlushedDouble:
+            return ExitValue::inJSStackAsDouble(flush.virtualRegister());
+        }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Invalid flush format&quot;);
+        return ExitValue::dead();
+    }
+    
+    ExitValue exitValueForNode(
+        StackmapArgumentList&amp; arguments, const HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt;&amp; map,
+        Node* node)
+    {
+        // NOTE: In FTL-&gt;B3, we cannot generate code here, because m_output is positioned after the
+        // stackmap value. Like all values, the stackmap value cannot use a child that is defined after
+        // it.
+        
+        if (node) {
+            ASSERT(node-&gt;shouldGenerate());
+            ASSERT(node-&gt;hasResult());
+
+            switch (node-&gt;op()) {
+            case BottomValue:
+                // This might arise in object materializations. I actually doubt that it would,
+                // but it seems worthwhile to be conservative.
+                return ExitValue::dead();
+                
+            case JSConstant:
+            case Int52Constant:
+            case DoubleConstant:
+                return ExitValue::constant(node-&gt;asJSValue());
+                
+            default:
+                if (node-&gt;isPhantomAllocation())
+                    return ExitValue::materializeNewObject(map.get(node));
+                break;
+            }
+        }
+        
+        for (unsigned i = 0; i &lt; m_availableRecoveries.size(); ++i) {
+            AvailableRecovery recovery = m_availableRecoveries[i];
+            if (recovery.node() != node)
+                continue;
+            ExitValue result = ExitValue::recovery(
+                recovery.opcode(), arguments.size(), arguments.size() + 1,
+                recovery.format());
+            arguments.append(recovery.left());
+            arguments.append(recovery.right());
+            return result;
+        }
+        
+        LoweredNodeValue value = m_int32Values.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatInt32, value.value());
+        
+        value = m_int52Values.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatInt52, value.value());
+        
+        value = m_strictInt52Values.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatStrictInt52, value.value());
+        
+        value = m_booleanValues.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatBoolean, value.value());
+        
+        value = m_jsValueValues.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatJS, value.value());
+        
+        value = m_doubleValues.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatDouble, value.value());
+
+        DFG_CRASH(m_graph, m_node, toCString(&quot;Cannot find value for node: &quot;, node).data());
+        return ExitValue::dead();
+    }
+
+    ExitValue exitArgument(StackmapArgumentList&amp; arguments, DataFormat format, LValue value)
+    {
+        ExitValue result = ExitValue::exitArgument(ExitArgument(format, arguments.size()));
+        arguments.append(value);
+        return result;
+    }
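exitArgument() above captures the position a live value will occupy in the stackmap argument list before appending it. A sketch of that index-then-append pattern, with hypothetical types (ExitArgSketch is not JSC's ExitArgument):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of exitArgument(): the exit value records the index the
// value will occupy in the stackmap argument list, then the value is appended.
struct ExitArgSketch {
    int format;
    std::size_t argumentIndex;
};

ExitArgSketch exitArgumentSketch(std::vector<int>& arguments, int format, int value)
{
    ExitArgSketch result = { format, arguments.size() }; // index captured before the append
    arguments.push_back(value);
    return result;
}
```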
+
+    ExitValue exitValueForTailCall(StackmapArgumentList&amp; arguments, Node* node)
+    {
+        ASSERT(node-&gt;shouldGenerate());
+        ASSERT(node-&gt;hasResult());
+
+        switch (node-&gt;op()) {
+        case JSConstant:
+        case Int52Constant:
+        case DoubleConstant:
+            return ExitValue::constant(node-&gt;asJSValue());
+
+        default:
+            break;
+        }
+
+        LoweredNodeValue value = m_jsValueValues.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatJS, value.value());
+
+        value = m_int32Values.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatJS, boxInt32(value.value()));
+
+        value = m_booleanValues.get(node);
+        if (isValid(value))
+            return exitArgument(arguments, DataFormatJS, boxBoolean(value.value()));
+
+        // Doubles and Int52s have already been converted to JSValues by ValueRep().
+        DFG_CRASH(m_graph, m_node, toCString(&quot;Cannot find value for node: &quot;, node).data());
+    }
+    
+    bool doesKill(Edge edge)
+    {
+        if (edge.doesNotKill())
+            return false;
+        
+        if (edge-&gt;hasConstant())
+            return false;
+        
+        return true;
+    }
+
+    void addAvailableRecovery(
+        Node* node, RecoveryOpcode opcode, LValue left, LValue right, DataFormat format)
+    {
+        m_availableRecoveries.append(AvailableRecovery(node, opcode, left, right, format));
+    }
+    
+    void addAvailableRecovery(
+        Edge edge, RecoveryOpcode opcode, LValue left, LValue right, DataFormat format)
+    {
+        addAvailableRecovery(edge.node(), opcode, left, right, format);
+    }
+    
+    void setInt32(Node* node, LValue value)
+    {
+        m_int32Values.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setInt52(Node* node, LValue value)
+    {
+        m_int52Values.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setStrictInt52(Node* node, LValue value)
+    {
+        m_strictInt52Values.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setInt52(Node* node, LValue value, Int52Kind kind)
+    {
+        switch (kind) {
+        case Int52:
+            setInt52(node, value);
+            return;
+            
+        case StrictInt52:
+            setStrictInt52(node, value);
+            return;
+        }
+        
+        DFG_CRASH(m_graph, m_node, &quot;Corrupt int52 kind&quot;);
+    }
+    void setJSValue(Node* node, LValue value)
+    {
+        m_jsValueValues.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setBoolean(Node* node, LValue value)
+    {
+        m_booleanValues.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setStorage(Node* node, LValue value)
+    {
+        m_storageValues.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+    void setDouble(Node* node, LValue value)
+    {
+        m_doubleValues.set(node, LoweredNodeValue(value, m_highBlock));
+    }
+
+    void setInt32(LValue value)
+    {
+        setInt32(m_node, value);
+    }
+    void setInt52(LValue value)
+    {
+        setInt52(m_node, value);
+    }
+    void setStrictInt52(LValue value)
+    {
+        setStrictInt52(m_node, value);
+    }
+    void setInt52(LValue value, Int52Kind kind)
+    {
+        setInt52(m_node, value, kind);
+    }
+    void setJSValue(LValue value)
+    {
+        setJSValue(m_node, value);
+    }
+    void setBoolean(LValue value)
+    {
+        setBoolean(m_node, value);
+    }
+    void setStorage(LValue value)
+    {
+        setStorage(m_node, value);
+    }
+    void setDouble(LValue value)
+    {
+        setDouble(m_node, value);
+    }
+    
+    bool isValid(const LoweredNodeValue&amp; value)
+    {
+        if (!value)
+            return false;
+        if (!m_graph.m_dominators-&gt;dominates(value.block(), m_highBlock))
+            return false;
+        return true;
+    }
+    
+    void addWeakReference(JSCell* target)
+    {
+        m_graph.m_plan.weakReferences.addLazily(target);
+    }
+
+    LValue loadStructure(LValue value)
+    {
+        LValue tableIndex = m_out.load32(value, m_heaps.JSCell_structureID);
+        LValue tableBase = m_out.loadPtr(
+            m_out.absolute(vm().heap.structureIDTable().base()));
+        TypedPointer address = m_out.baseIndex(
+            m_heaps.structureTable, tableBase, m_out.zeroExtPtr(tableIndex));
+        return m_out.loadPtr(address);
+    }
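loadStructure() above turns a cell's 32-bit structure ID into a Structure pointer by indexing a side table. A sketch of that indirection; StructureSketch is a hypothetical stand-in for JSC's Structure:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative sketch of loadStructure(): the cell stores a 32-bit structure
// ID, which is zero-extended and scaled by pointer size to index the
// structure table, as the baseIndex load above does.
struct StructureSketch { uint32_t id; };

StructureSketch* loadStructureSketch(uint32_t structureID, StructureSketch* const* tableBase)
{
    return tableBase[static_cast<std::size_t>(structureID)];
}
```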
+
+    LValue weakPointer(JSCell* pointer)
+    {
+        addWeakReference(pointer);
+        return m_out.constIntPtr(pointer);
+    }
+
+    LValue weakStructureID(Structure* structure)
+    {
+        addWeakReference(structure);
+        return m_out.constInt32(structure-&gt;id());
+    }
+    
+    LValue weakStructure(Structure* structure)
+    {
+        return weakPointer(structure);
+    }
+    
+    TypedPointer addressFor(LValue base, int operand, ptrdiff_t offset = 0)
+    {
+        return m_out.address(base, m_heaps.variables[operand], offset);
+    }
+    TypedPointer payloadFor(LValue base, int operand)
+    {
+        return addressFor(base, operand, PayloadOffset);
+    }
+    TypedPointer tagFor(LValue base, int operand)
+    {
+        return addressFor(base, operand, TagOffset);
+    }
+    TypedPointer addressFor(int operand, ptrdiff_t offset = 0)
+    {
+        return addressFor(VirtualRegister(operand), offset);
+    }
+    TypedPointer addressFor(VirtualRegister operand, ptrdiff_t offset = 0)
+    {
+        if (operand.isLocal())
+            return addressFor(m_captured, operand.offset(), offset);
+        return addressFor(m_callFrame, operand.offset(), offset);
+    }
+    TypedPointer payloadFor(int operand)
+    {
+        return payloadFor(VirtualRegister(operand));
+    }
+    TypedPointer payloadFor(VirtualRegister operand)
+    {
+        return addressFor(operand, PayloadOffset);
+    }
+    TypedPointer tagFor(int operand)
+    {
+        return tagFor(VirtualRegister(operand));
+    }
+    TypedPointer tagFor(VirtualRegister operand)
+    {
+        return addressFor(operand, TagOffset);
+    }
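The addressFor()/payloadFor()/tagFor() helpers above address the payload and tag halves of a 64-bit JSValue stack slot at fixed byte offsets. A sketch assuming a little-endian 64-bit layout (payload at byte 0, tag at byte 4); kPayloadOffset and kTagOffset are illustrative stand-ins for JSC's PayloadOffset/TagOffset.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Illustrative sketch of tagFor()/payloadFor(): on a little-endian 64-bit
// target, a JSValue slot's payload lives at byte offset 0 and its tag at
// byte offset 4.
const std::ptrdiff_t kPayloadOffset = 0;
const std::ptrdiff_t kTagOffset = 4;

uint32_t readTag(const uint64_t* slot)
{
    uint32_t tag;
    std::memcpy(&tag, reinterpret_cast<const char*>(slot) + kTagOffset, sizeof(tag));
    return tag;
}

uint32_t readPayload(const uint64_t* slot)
{
    uint32_t payload;
    std::memcpy(&payload, reinterpret_cast<const char*>(slot) + kPayloadOffset, sizeof(payload));
    return payload;
}
```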
+    
+    AbstractValue abstractValue(Node* node)
+    {
+        return m_state.forNode(node);
+    }
+    AbstractValue abstractValue(Edge edge)
+    {
+        return abstractValue(edge.node());
+    }
+    
+    SpeculatedType provenType(Node* node)
+    {
+        return abstractValue(node).m_type;
+    }
+    SpeculatedType provenType(Edge edge)
+    {
+        return provenType(edge.node());
+    }
+    
+    JSValue provenValue(Node* node)
+    {
+        return abstractValue(node).m_value;
+    }
+    JSValue provenValue(Edge edge)
+    {
+        return provenValue(edge.node());
+    }
+    
+    StructureAbstractValue abstractStructure(Node* node)
+    {
+        return abstractValue(node).m_structure;
+    }
+    StructureAbstractValue abstractStructure(Edge edge)
+    {
+        return abstractStructure(edge.node());
+    }
+    
+#if ENABLE(MASM_PROBE)
+    void probe(std::function&lt;void (CCallHelpers::ProbeContext*)&gt; probeFunc)
+    {
+        UNUSED_PARAM(probeFunc);
+    }
+#endif
+
+    void crash()
+    {
+        crash(m_highBlock-&gt;index, m_node-&gt;index());
+    }
+    void crash(BlockIndex blockIndex, unsigned nodeIndex)
+    {
+#if ASSERT_DISABLED
+        m_out.call(m_out.voidType, m_out.operation(ftlUnreachable));
+        UNUSED_PARAM(blockIndex);
+        UNUSED_PARAM(nodeIndex);
+#else
+        m_out.call(
+            m_out.voidType,
+            m_out.constIntPtr(ftlUnreachable),
+            m_out.constIntPtr(codeBlock()), m_out.constInt32(blockIndex),
+            m_out.constInt32(nodeIndex));
+#endif
+        m_out.unreachable();
+    }
+
+    AvailabilityMap&amp; availabilityMap() { return m_availabilityCalculator.m_availability; }
+    
+    VM&amp; vm() { return m_graph.m_vm; }
+    CodeBlock* codeBlock() { return m_graph.m_codeBlock; }
+    
+    Graph&amp; m_graph;
+    State&amp; m_ftlState;
+    AbstractHeapRepository m_heaps;
+    Output m_out;
+    Procedure&amp; m_proc;
+    
+    LBasicBlock m_prologue;
+    LBasicBlock m_handleExceptions;
+    HashMap&lt;DFG::BasicBlock*, LBasicBlock&gt; m_blocks;
+    
+    LValue m_callFrame;
+    LValue m_captured;
+    LValue m_tagTypeNumber;
+    LValue m_tagMask;
+    
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_int32Values;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_strictInt52Values;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_int52Values;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_jsValueValues;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_booleanValues;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_storageValues;
+    HashMap&lt;Node*, LoweredNodeValue&gt; m_doubleValues;
+    
+    // This is a bit of a hack. It prevents B3 from having to do CSE on loading of arguments.
+    // It's nice to have these optimizations on our end because we can guarantee them a bit better.
+    // Probably also saves B3 compile time.
+    HashMap&lt;Node*, LValue&gt; m_loadedArgumentValues;
+    
+    HashMap&lt;Node*, LValue&gt; m_phis;
+    
+    LocalOSRAvailabilityCalculator m_availabilityCalculator;
+    
+    Vector&lt;AvailableRecovery, 3&gt; m_availableRecoveries;
+    
+    InPlaceAbstractState m_state;
+    AbstractInterpreter&lt;InPlaceAbstractState&gt; m_interpreter;
+    DFG::BasicBlock* m_highBlock;
+    DFG::BasicBlock* m_nextHighBlock;
+    LBasicBlock m_nextLowBlock;
+
+    NodeOrigin m_origin;
+    unsigned m_nodeIndex;
+    Node* m_node;
+};
+
+} // anonymous namespace
+
+void lowerDFGToB3(State&amp; state)
+{
+    LowerDFGToB3 lowering(state);
+    lowering.lower();
+}
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToB3hfromrev196731trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMh"></a>
<div class="copfile"><h4>Copied: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.h (from rev 196731, trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h) (0 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -0,0 +1,43 @@
</span><ins>+/*
+ * Copyright (C) 2013 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef FTLLowerDFGToB3_h
+#define FTLLowerDFGToB3_h
+
+#if ENABLE(FTL_JIT)
+
+#include &quot;DFGGraph.h&quot;
+#include &quot;FTLState.h&quot;
+
+namespace JSC { namespace FTL {
+
+void lowerDFGToB3(State&amp;);
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
+#endif // FTLLowerDFGToB3_h
+
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMcpp"></a>
<div class="delfile"><h4>Deleted: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -1,10461 +0,0 @@
</span><del>-/*
- * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include &quot;config.h&quot;
-#include &quot;FTLLowerDFGToLLVM.h&quot;
-
-#if ENABLE(FTL_JIT)
-
-#include &quot;AirGenerationContext.h&quot;
-#include &quot;AllowMacroScratchRegisterUsage.h&quot;
-#include &quot;B3StackmapGenerationParams.h&quot;
-#include &quot;CallFrameShuffler.h&quot;
-#include &quot;CodeBlockWithJITType.h&quot;
-#include &quot;DFGAbstractInterpreterInlines.h&quot;
-#include &quot;DFGDominators.h&quot;
-#include &quot;DFGInPlaceAbstractState.h&quot;
-#include &quot;DFGOSRAvailabilityAnalysisPhase.h&quot;
-#include &quot;DFGOSRExitFuzz.h&quot;
-#include &quot;DirectArguments.h&quot;
-#include &quot;FTLAbstractHeapRepository.h&quot;
-#include &quot;FTLAvailableRecovery.h&quot;
-#include &quot;FTLExceptionTarget.h&quot;
-#include &quot;FTLForOSREntryJITCode.h&quot;
-#include &quot;FTLFormattedValue.h&quot;
-#include &quot;FTLLazySlowPathCall.h&quot;
-#include &quot;FTLLoweredNodeValue.h&quot;
-#include &quot;FTLOperations.h&quot;
-#include &quot;FTLOutput.h&quot;
-#include &quot;FTLPatchpointExceptionHandle.h&quot;
-#include &quot;FTLThunks.h&quot;
-#include &quot;FTLWeightedTarget.h&quot;
-#include &quot;JITAddGenerator.h&quot;
-#include &quot;JITBitAndGenerator.h&quot;
-#include &quot;JITBitOrGenerator.h&quot;
-#include &quot;JITBitXorGenerator.h&quot;
-#include &quot;JITDivGenerator.h&quot;
-#include &quot;JITInlineCacheGenerator.h&quot;
-#include &quot;JITLeftShiftGenerator.h&quot;
-#include &quot;JITMulGenerator.h&quot;
-#include &quot;JITRightShiftGenerator.h&quot;
-#include &quot;JITSubGenerator.h&quot;
-#include &quot;JSCInlines.h&quot;
-#include &quot;JSGeneratorFunction.h&quot;
-#include &quot;JSLexicalEnvironment.h&quot;
-#include &quot;OperandsInlines.h&quot;
-#include &quot;ScopedArguments.h&quot;
-#include &quot;ScopedArgumentsTable.h&quot;
-#include &quot;ScratchRegisterAllocator.h&quot;
-#include &quot;SetupVarargsFrame.h&quot;
-#include &quot;VirtualRegister.h&quot;
-#include &quot;Watchdog.h&quot;
-#include &lt;atomic&gt;
-#if !OS(WINDOWS)
-#include &lt;dlfcn.h&gt;
-#endif
-#include &lt;unordered_set&gt;
-#include &lt;wtf/Box.h&gt;
-#include &lt;wtf/ProcessID.h&gt;
-
-namespace JSC { namespace FTL {
-
-using namespace B3;
-using namespace DFG;
-
-namespace {
-
-std::atomic&lt;int&gt; compileCounter;
-
-#if ASSERT_DISABLED
-NO_RETURN_DUE_TO_CRASH static void ftlUnreachable()
-{
-    CRASH();
-}
-#else
-NO_RETURN_DUE_TO_CRASH static void ftlUnreachable(
-    CodeBlock* codeBlock, BlockIndex blockIndex, unsigned nodeIndex)
-{
-    dataLog(&quot;Crashing in thought-to-be-unreachable FTL-generated code for &quot;, pointerDump(codeBlock), &quot; at basic block #&quot;, blockIndex);
-    if (nodeIndex != UINT_MAX)
-        dataLog(&quot;, node @&quot;, nodeIndex);
-    dataLog(&quot;.\n&quot;);
-    CRASH();
-}
-#endif
-
-// Using this instead of typeCheck() helps to reduce the load on LLVM, by creating
-// significantly less dead code.
-#define FTL_TYPE_CHECK_WITH_EXIT_KIND(exitKind, lowValue, highValue, typesPassedThrough, failCondition) do { \
-        FormattedValue _ftc_lowValue = (lowValue);                      \
-        Edge _ftc_highValue = (highValue);                              \
-        SpeculatedType _ftc_typesPassedThrough = (typesPassedThrough);  \
-        if (!m_interpreter.needsTypeCheck(_ftc_highValue, _ftc_typesPassedThrough)) \
-            break;                                                      \
-        typeCheck(_ftc_lowValue, _ftc_highValue, _ftc_typesPassedThrough, (failCondition), exitKind); \
-    } while (false)
-
-#define FTL_TYPE_CHECK(lowValue, highValue, typesPassedThrough, failCondition) \
-    FTL_TYPE_CHECK_WITH_EXIT_KIND(BadType, lowValue, highValue, typesPassedThrough, failCondition)
-
-class LowerDFGToLLVM {
-    WTF_MAKE_NONCOPYABLE(LowerDFGToLLVM);
-public:
-    LowerDFGToLLVM(State&amp; state)
-        : m_graph(state.graph)
-        , m_ftlState(state)
-        , m_out(state)
-        , m_proc(*state.proc)
-        , m_state(state.graph)
-        , m_interpreter(state.graph, m_state)
-    {
-    }
-    
-    void lower()
-    {
-        State* state = &amp;m_ftlState;
-        
-        CString name;
-        if (verboseCompilationEnabled()) {
-            name = toCString(
-                &quot;jsBody_&quot;, ++compileCounter, &quot;_&quot;, codeBlock()-&gt;inferredName(),
-                &quot;_&quot;, codeBlock()-&gt;hash());
-        } else
-            name = &quot;jsBody&quot;;
-        
-        m_graph.ensureDominators();
-
-        if (verboseCompilationEnabled())
-            dataLog(&quot;Function ready, beginning lowering.\n&quot;);
-
-        m_out.initialize(m_heaps);
-
-        // We use prologue frequency for all of the initialization code.
-        m_out.setFrequency(1);
-        
-        m_prologue = FTL_NEW_BLOCK(m_out, (&quot;Prologue&quot;));
-        LBasicBlock stackOverflow = FTL_NEW_BLOCK(m_out, (&quot;Stack overflow&quot;));
-        m_handleExceptions = FTL_NEW_BLOCK(m_out, (&quot;Handle Exceptions&quot;));
-        
-        LBasicBlock checkArguments = FTL_NEW_BLOCK(m_out, (&quot;Check arguments&quot;));
-
-        for (BlockIndex blockIndex = 0; blockIndex &lt; m_graph.numBlocks(); ++blockIndex) {
-            m_highBlock = m_graph.block(blockIndex);
-            if (!m_highBlock)
-                continue;
-            m_out.setFrequency(m_highBlock-&gt;executionCount);
-            m_blocks.add(m_highBlock, FTL_NEW_BLOCK(m_out, (&quot;Block &quot;, *m_highBlock)));
-        }
-
-        // Back to prologue frequency for any blocks that get sneakily created in the initialization code.
-        m_out.setFrequency(1);
-        
-        m_out.appendTo(m_prologue, stackOverflow);
-        m_out.initializeConstants(m_proc, m_prologue);
-        createPhiVariables();
-
-        size_t sizeOfCaptured = sizeof(JSValue) * m_graph.m_nextMachineLocal;
-        B3::SlotBaseValue* capturedBase = m_out.lockedStackSlot(sizeOfCaptured);
-        m_captured = m_out.add(capturedBase, m_out.constIntPtr(sizeOfCaptured));
-        state-&gt;capturedValue = capturedBase-&gt;slot();
-
-        auto preOrder = m_graph.blocksInPreOrder();
-
-        // We should not create any alloca's after this point, since they will cease to
-        // be mem2reg candidates.
-        
-        m_callFrame = m_out.framePointer();
-        m_tagTypeNumber = m_out.constInt64(TagTypeNumber);
-        m_tagMask = m_out.constInt64(TagMask);
-
-        // Make sure that B3 knows that we really care about the mask registers. This forces the
-        // constants to be materialized in registers.
-        m_proc.addFastConstant(m_tagTypeNumber-&gt;key());
-        m_proc.addFastConstant(m_tagMask-&gt;key());
-        
-        m_out.storePtr(m_out.constIntPtr(codeBlock()), addressFor(JSStack::CodeBlock));
-        
-        m_out.branch(
-            didOverflowStack(), rarely(stackOverflow), usually(checkArguments));
-        
-        m_out.appendTo(stackOverflow, m_handleExceptions);
-        m_out.call(m_out.voidType, m_out.operation(operationThrowStackOverflowError), m_callFrame, m_out.constIntPtr(codeBlock()));
-        m_out.patchpoint(Void)-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp;) {
-                // We are terminal, so we can clobber everything. That's why we don't claim to
-                // clobber scratch.
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                
-                jit.copyCalleeSavesToVMCalleeSavesBuffer();
-                jit.move(CCallHelpers::TrustedImmPtr(jit.vm()), GPRInfo::argumentGPR0);
-                jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
-                CCallHelpers::Call call = jit.call();
-                jit.jumpToExceptionHandler();
-
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        linkBuffer.link(call, FunctionPtr(lookupExceptionHandlerFromCallerFrame));
-                    });
-            });
-        m_out.unreachable();
-        
-        m_out.appendTo(m_handleExceptions, checkArguments);
-        Box&lt;CCallHelpers::Label&gt; exceptionHandler = state-&gt;exceptionHandler;
-        m_out.patchpoint(Void)-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp;) {
-                CCallHelpers::Jump jump = jit.jump();
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        linkBuffer.link(jump, linkBuffer.locationOf(*exceptionHandler));
-                    });
-            });
-        m_out.unreachable();
-        
-        m_out.appendTo(checkArguments, lowBlock(m_graph.block(0)));
-        availabilityMap().clear();
-        availabilityMap().m_locals = Operands&lt;Availability&gt;(codeBlock()-&gt;numParameters(), 0);
-        for (unsigned i = codeBlock()-&gt;numParameters(); i--;) {
-            availabilityMap().m_locals.argument(i) =
-                Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i)));
-        }
-        m_node = nullptr;
-        m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
-        for (unsigned i = codeBlock()-&gt;numParameters(); i--;) {
-            Node* node = m_graph.m_arguments[i];
-            VirtualRegister operand = virtualRegisterForArgument(i);
-            
-            LValue jsValue = m_out.load64(addressFor(operand));
-            
-            if (node) {
-                DFG_ASSERT(m_graph, node, operand == node-&gt;stackAccessData()-&gt;machineLocal);
-                
-                // This is a hack, but it's an effective one. It allows us to do CSE on the
-                // primordial load of arguments. This assumes that the GetLocal that got put in
-                // place of the original SetArgument doesn't have any effects before it. This
-                // should hold true.
-                m_loadedArgumentValues.add(node, jsValue);
-            }
-            
-            switch (m_graph.m_argumentFormats[i]) {
-            case FlushedInt32:
-                speculate(BadType, jsValueValue(jsValue), node, isNotInt32(jsValue));
-                break;
-            case FlushedBoolean:
-                speculate(BadType, jsValueValue(jsValue), node, isNotBoolean(jsValue));
-                break;
-            case FlushedCell:
-                speculate(BadType, jsValueValue(jsValue), node, isNotCell(jsValue));
-                break;
-            case FlushedJSValue:
-                break;
-            default:
-                DFG_CRASH(m_graph, node, &quot;Bad flush format for argument&quot;);
-                break;
-            }
-        }
-        m_out.jump(lowBlock(m_graph.block(0)));
-        
-        for (DFG::BasicBlock* block : preOrder)
-            compileBlock(block);
-
-        // We create all Phi's up front, but we may then decide not to compile the basic block
-        // that would have contained one of them. So this creates orphans, which triggers B3
-        // validation failures. Calling this fixes the issue.
-        //
-        // Note that you should avoid the temptation to make this call conditional upon
-        // validation being enabled. B3 makes no guarantees of any kind of correctness when
-        // dealing with IR that would have failed validation. For example, it would be valid to
-        // write a B3 phase that so aggressively assumes the lack of orphans that it would crash
-        // if any orphans were around. We might even have such phases already.
-        m_proc.deleteOrphans();
-
-        // We put the blocks into the B3 procedure in a super weird order. Now we reorder them.
-        m_out.applyBlockOrder();
-    }
-
-private:
-    
-    void createPhiVariables()
-    {
-        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
-            DFG::BasicBlock* block = m_graph.block(blockIndex);
-            if (!block)
-                continue;
-            for (unsigned nodeIndex = block-&gt;size(); nodeIndex--;) {
-                Node* node = block-&gt;at(nodeIndex);
-                if (node-&gt;op() != DFG::Phi)
-                    continue;
-                LType type;
-                switch (node-&gt;flags() &amp; NodeResultMask) {
-                case NodeResultDouble:
-                    type = m_out.doubleType;
-                    break;
-                case NodeResultInt32:
-                    type = m_out.int32;
-                    break;
-                case NodeResultInt52:
-                    type = m_out.int64;
-                    break;
-                case NodeResultBoolean:
-                    type = m_out.boolean;
-                    break;
-                case NodeResultJS:
-                    type = m_out.int64;
-                    break;
-                default:
-                    DFG_CRASH(m_graph, node, &quot;Bad Phi node result type&quot;);
-                    break;
-                }
-                m_phis.add(node, m_proc.add&lt;Value&gt;(B3::Phi, type, Origin(node)));
-            }
-        }
-    }
-    
-    void compileBlock(DFG::BasicBlock* block)
-    {
-        if (!block)
-            return;
-        
-        if (verboseCompilationEnabled())
-            dataLog(&quot;Compiling block &quot;, *block, &quot;\n&quot;);
-        
-        m_highBlock = block;
-        
-        // Make sure that any blocks created while lowering code in the high block have the frequency of
-        // the high block. This is appropriate because B3 doesn't need precise frequencies. It just needs
-        // something roughly approximate for things like register allocation.
-        m_out.setFrequency(m_highBlock-&gt;executionCount);
-        
-        LBasicBlock lowBlock = m_blocks.get(m_highBlock);
-        
-        m_nextHighBlock = 0;
-        for (BlockIndex nextBlockIndex = m_highBlock-&gt;index + 1; nextBlockIndex &lt; m_graph.numBlocks(); ++nextBlockIndex) {
-            m_nextHighBlock = m_graph.block(nextBlockIndex);
-            if (m_nextHighBlock)
-                break;
-        }
-        m_nextLowBlock = m_nextHighBlock ? m_blocks.get(m_nextHighBlock) : 0;
-        
-        // All of this effort to find the next block gives us the ability to keep the
-        // generated IR in roughly program order. This ought not affect the performance
-        // of the generated code (since we expect LLVM to reorder things) but it will
-        // make IR dumps easier to read.
-        m_out.appendTo(lowBlock, m_nextLowBlock);
-        
-        if (Options::ftlCrashes())
-            m_out.trap();
-        
-        if (!m_highBlock-&gt;cfaHasVisited) {
-            if (verboseCompilationEnabled())
-                dataLog(&quot;Bailing because CFA didn't reach.\n&quot;);
-            crash(m_highBlock-&gt;index, UINT_MAX);
-            return;
-        }
-        
-        m_availabilityCalculator.beginBlock(m_highBlock);
-        
-        m_state.reset();
-        m_state.beginBasicBlock(m_highBlock);
-        
-        for (m_nodeIndex = 0; m_nodeIndex &lt; m_highBlock-&gt;size(); ++m_nodeIndex) {
-            if (!compileNode(m_nodeIndex))
-                break;
-        }
-    }
-
-    void safelyInvalidateAfterTermination()
-    {
-        if (verboseCompilationEnabled())
-            dataLog(&quot;Bailing.\n&quot;);
-        crash();
-
-        // Invalidate dominated blocks. Under normal circumstances we would expect
-        // them to be invalidated already. But you can have the CFA become more
-        // precise over time because the structures of objects change on the main
-        // thread. Failing to do this would result in weird crashes due to a value
-        // being used but not defined. Race conditions FTW!
-        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
-            DFG::BasicBlock* target = m_graph.block(blockIndex);
-            if (!target)
-                continue;
-            if (m_graph.m_dominators-&gt;dominates(m_highBlock, target)) {
-                if (verboseCompilationEnabled())
-                    dataLog(&quot;Block &quot;, *target, &quot; will bail also.\n&quot;);
-                target-&gt;cfaHasVisited = false;
-            }
-        }
-    }
-
-    bool compileNode(unsigned nodeIndex)
-    {
-        if (!m_state.isValid()) {
-            safelyInvalidateAfterTermination();
-            return false;
-        }
-        
-        m_node = m_highBlock-&gt;at(nodeIndex);
-        m_origin = m_node-&gt;origin;
-        m_out.setOrigin(m_node);
-        
-        if (verboseCompilationEnabled())
-            dataLog(&quot;Lowering &quot;, m_node, &quot;\n&quot;);
-        
-        m_availableRecoveries.resize(0);
-        
-        m_interpreter.startExecuting();
-        
-        switch (m_node-&gt;op()) {
-        case DFG::Upsilon:
-            compileUpsilon();
-            break;
-        case DFG::Phi:
-            compilePhi();
-            break;
-        case JSConstant:
-            break;
-        case DoubleConstant:
-            compileDoubleConstant();
-            break;
-        case Int52Constant:
-            compileInt52Constant();
-            break;
-        case DoubleRep:
-            compileDoubleRep();
-            break;
-        case DoubleAsInt32:
-            compileDoubleAsInt32();
-            break;
-        case DFG::ValueRep:
-            compileValueRep();
-            break;
-        case Int52Rep:
-            compileInt52Rep();
-            break;
-        case ValueToInt32:
-            compileValueToInt32();
-            break;
-        case BooleanToNumber:
-            compileBooleanToNumber();
-            break;
-        case ExtractOSREntryLocal:
-            compileExtractOSREntryLocal();
-            break;
-        case GetStack:
-            compileGetStack();
-            break;
-        case PutStack:
-            compilePutStack();
-            break;
-        case DFG::Check:
-            compileNoOp();
-            break;
-        case ToThis:
-            compileToThis();
-            break;
-        case ValueAdd:
-            compileValueAdd();
-            break;
-        case StrCat:
-            compileStrCat();
-            break;
-        case ArithAdd:
-        case ArithSub:
-            compileArithAddOrSub();
-            break;
-        case ArithClz32:
-            compileArithClz32();
-            break;
-        case ArithMul:
-            compileArithMul();
-            break;
-        case ArithDiv:
-            compileArithDiv();
-            break;
-        case ArithMod:
-            compileArithMod();
-            break;
-        case ArithMin:
-        case ArithMax:
-            compileArithMinOrMax();
-            break;
-        case ArithAbs:
-            compileArithAbs();
-            break;
-        case ArithSin:
-            compileArithSin();
-            break;
-        case ArithCos:
-            compileArithCos();
-            break;
-        case ArithPow:
-            compileArithPow();
-            break;
-        case ArithRandom:
-            compileArithRandom();
-            break;
-        case ArithRound:
-            compileArithRound();
-            break;
-        case ArithSqrt:
-            compileArithSqrt();
-            break;
-        case ArithLog:
-            compileArithLog();
-            break;
-        case ArithFRound:
-            compileArithFRound();
-            break;
-        case ArithNegate:
-            compileArithNegate();
-            break;
-        case DFG::BitAnd:
-            compileBitAnd();
-            break;
-        case DFG::BitOr:
-            compileBitOr();
-            break;
-        case DFG::BitXor:
-            compileBitXor();
-            break;
-        case BitRShift:
-            compileBitRShift();
-            break;
-        case BitLShift:
-            compileBitLShift();
-            break;
-        case BitURShift:
-            compileBitURShift();
-            break;
-        case UInt32ToNumber:
-            compileUInt32ToNumber();
-            break;
-        case CheckStructure:
-            compileCheckStructure();
-            break;
-        case CheckCell:
-            compileCheckCell();
-            break;
-        case CheckNotEmpty:
-            compileCheckNotEmpty();
-            break;
-        case CheckBadCell:
-            compileCheckBadCell();
-            break;
-        case CheckIdent:
-            compileCheckIdent();
-            break;
-        case GetExecutable:
-            compileGetExecutable();
-            break;
-        case ArrayifyToStructure:
-            compileArrayifyToStructure();
-            break;
-        case PutStructure:
-            compilePutStructure();
-            break;
-        case GetById:
-        case GetByIdFlush:
-            compileGetById();
-            break;
-        case In:
-            compileIn();
-            break;
-        case PutById:
-        case PutByIdDirect:
-        case PutByIdFlush:
-            compilePutById();
-            break;
-        case PutGetterById:
-        case PutSetterById:
-            compilePutAccessorById();
-            break;
-        case PutGetterSetterById:
-            compilePutGetterSetterById();
-            break;
-        case PutGetterByVal:
-        case PutSetterByVal:
-            compilePutAccessorByVal();
-            break;
-        case GetButterfly:
-            compileGetButterfly();
-            break;
-        case GetButterflyReadOnly:
-            compileGetButterflyReadOnly();
-            break;
-        case ConstantStoragePointer:
-            compileConstantStoragePointer();
-            break;
-        case GetIndexedPropertyStorage:
-            compileGetIndexedPropertyStorage();
-            break;
-        case CheckArray:
-            compileCheckArray();
-            break;
-        case GetArrayLength:
-            compileGetArrayLength();
-            break;
-        case CheckInBounds:
-            compileCheckInBounds();
-            break;
-        case GetByVal:
-            compileGetByVal();
-            break;
-        case GetMyArgumentByVal:
-            compileGetMyArgumentByVal();
-            break;
-        case PutByVal:
-        case PutByValAlias:
-        case PutByValDirect:
-            compilePutByVal();
-            break;
-        case ArrayPush:
-            compileArrayPush();
-            break;
-        case ArrayPop:
-            compileArrayPop();
-            break;
-        case CreateActivation:
-            compileCreateActivation();
-            break;
-        case NewFunction:
-        case NewArrowFunction:
-        case NewGeneratorFunction:
-            compileNewFunction();
-            break;
-        case CreateDirectArguments:
-            compileCreateDirectArguments();
-            break;
-        case CreateScopedArguments:
-            compileCreateScopedArguments();
-            break;
-        case CreateClonedArguments:
-            compileCreateClonedArguments();
-            break;
-        case NewObject:
-            compileNewObject();
-            break;
-        case NewArray:
-            compileNewArray();
-            break;
-        case NewArrayBuffer:
-            compileNewArrayBuffer();
-            break;
-        case NewArrayWithSize:
-            compileNewArrayWithSize();
-            break;
-        case NewTypedArray:
-            compileNewTypedArray();
-            break;
-        case GetTypedArrayByteOffset:
-            compileGetTypedArrayByteOffset();
-            break;
-        case AllocatePropertyStorage:
-            compileAllocatePropertyStorage();
-            break;
-        case ReallocatePropertyStorage:
-            compileReallocatePropertyStorage();
-            break;
-        case ToString:
-        case CallStringConstructor:
-            compileToStringOrCallStringConstructor();
-            break;
-        case ToPrimitive:
-            compileToPrimitive();
-            break;
-        case MakeRope:
-            compileMakeRope();
-            break;
-        case StringCharAt:
-            compileStringCharAt();
-            break;
-        case StringCharCodeAt:
-            compileStringCharCodeAt();
-            break;
-        case StringFromCharCode:
-            compileStringFromCharCode();
-            break;
-        case GetByOffset:
-        case GetGetterSetterByOffset:
-            compileGetByOffset();
-            break;
-        case GetGetter:
-            compileGetGetter();
-            break;
-        case GetSetter:
-            compileGetSetter();
-            break;
-        case MultiGetByOffset:
-            compileMultiGetByOffset();
-            break;
-        case PutByOffset:
-            compilePutByOffset();
-            break;
-        case MultiPutByOffset:
-            compileMultiPutByOffset();
-            break;
-        case GetGlobalVar:
-        case GetGlobalLexicalVariable:
-            compileGetGlobalVariable();
-            break;
-        case PutGlobalVariable:
-            compilePutGlobalVariable();
-            break;
-        case NotifyWrite:
-            compileNotifyWrite();
-            break;
-        case GetCallee:
-            compileGetCallee();
-            break;
-        case GetArgumentCount:
-            compileGetArgumentCount();
-            break;
-        case GetScope:
-            compileGetScope();
-            break;
-        case SkipScope:
-            compileSkipScope();
-            break;
-        case GetClosureVar:
-            compileGetClosureVar();
-            break;
-        case PutClosureVar:
-            compilePutClosureVar();
-            break;
-        case GetFromArguments:
-            compileGetFromArguments();
-            break;
-        case PutToArguments:
-            compilePutToArguments();
-            break;
-        case CompareEq:
-            compileCompareEq();
-            break;
-        case CompareStrictEq:
-            compileCompareStrictEq();
-            break;
-        case CompareLess:
-            compileCompareLess();
-            break;
-        case CompareLessEq:
-            compileCompareLessEq();
-            break;
-        case CompareGreater:
-            compileCompareGreater();
-            break;
-        case CompareGreaterEq:
-            compileCompareGreaterEq();
-            break;
-        case LogicalNot:
-            compileLogicalNot();
-            break;
-        case Call:
-        case TailCallInlinedCaller:
-        case Construct:
-            compileCallOrConstruct();
-            break;
-        case TailCall:
-            compileTailCall();
-            break;
-        case CallVarargs:
-        case CallForwardVarargs:
-        case TailCallVarargs:
-        case TailCallVarargsInlinedCaller:
-        case TailCallForwardVarargs:
-        case TailCallForwardVarargsInlinedCaller:
-        case ConstructVarargs:
-        case ConstructForwardVarargs:
-            compileCallOrConstructVarargs();
-            break;
-        case LoadVarargs:
-            compileLoadVarargs();
-            break;
-        case ForwardVarargs:
-            compileForwardVarargs();
-            break;
-        case DFG::Jump:
-            compileJump();
-            break;
-        case DFG::Branch:
-            compileBranch();
-            break;
-        case DFG::Switch:
-            compileSwitch();
-            break;
-        case DFG::Return:
-            compileReturn();
-            break;
-        case ForceOSRExit:
-            compileForceOSRExit();
-            break;
-        case Throw:
-        case ThrowReferenceError:
-            compileThrow();
-            break;
-        case InvalidationPoint:
-            compileInvalidationPoint();
-            break;
-        case IsUndefined:
-            compileIsUndefined();
-            break;
-        case IsBoolean:
-            compileIsBoolean();
-            break;
-        case IsNumber:
-            compileIsNumber();
-            break;
-        case IsString:
-            compileIsString();
-            break;
-        case IsObject:
-            compileIsObject();
-            break;
-        case IsObjectOrNull:
-            compileIsObjectOrNull();
-            break;
-        case IsFunction:
-            compileIsFunction();
-            break;
-        case TypeOf:
-            compileTypeOf();
-            break;
-        case CheckTypeInfoFlags:
-            compileCheckTypeInfoFlags();
-            break;
-        case OverridesHasInstance:
-            compileOverridesHasInstance();
-            break;
-        case InstanceOf:
-            compileInstanceOf();
-            break;
-        case InstanceOfCustom:
-            compileInstanceOfCustom();
-            break;
-        case CountExecution:
-            compileCountExecution();
-            break;
-        case StoreBarrier:
-            compileStoreBarrier();
-            break;
-        case HasIndexedProperty:
-            compileHasIndexedProperty();
-            break;
-        case HasGenericProperty:
-            compileHasGenericProperty();
-            break;
-        case HasStructureProperty:
-            compileHasStructureProperty();
-            break;
-        case GetDirectPname:
-            compileGetDirectPname();
-            break;
-        case GetEnumerableLength:
-            compileGetEnumerableLength();
-            break;
-        case GetPropertyEnumerator:
-            compileGetPropertyEnumerator();
-            break;
-        case GetEnumeratorStructurePname:
-            compileGetEnumeratorStructurePname();
-            break;
-        case GetEnumeratorGenericPname:
-            compileGetEnumeratorGenericPname();
-            break;
-        case ToIndexString:
-            compileToIndexString();
-            break;
-        case CheckStructureImmediate:
-            compileCheckStructureImmediate();
-            break;
-        case MaterializeNewObject:
-            compileMaterializeNewObject();
-            break;
-        case MaterializeCreateActivation:
-            compileMaterializeCreateActivation();
-            break;
-        case CheckWatchdogTimer:
-            compileCheckWatchdogTimer();
-            break;
-        case CopyRest:
-            compileCopyRest();
-            break;
-        case GetRestLength:
-            compileGetRestLength();
-            break;
-
-        case PhantomLocal:
-        case LoopHint:
-        case MovHint:
-        case ZombieHint:
-        case ExitOK:
-        case PhantomNewObject:
-        case PhantomNewFunction:
-        case PhantomNewGeneratorFunction:
-        case PhantomCreateActivation:
-        case PhantomDirectArguments:
-        case PhantomClonedArguments:
-        case PutHint:
-        case BottomValue:
-        case KillStack:
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Unrecognized node in FTL backend&quot;);
-            break;
-        }
-        
-        if (m_node-&gt;isTerminal())
-            return false;
-        
-        if (!m_state.isValid()) {
-            safelyInvalidateAfterTermination();
-            return false;
-        }
-
-        m_availabilityCalculator.executeNode(m_node);
-        m_interpreter.executeEffects(nodeIndex);
-        
-        return true;
-    }
-
-    void compileUpsilon()
-    {
-        LValue upsilonValue = nullptr;
-        switch (m_node-&gt;child1().useKind()) {
-        case DoubleRepUse:
-            upsilonValue = lowDouble(m_node-&gt;child1());
-            break;
-        case Int32Use:
-        case KnownInt32Use:
-            upsilonValue = lowInt32(m_node-&gt;child1());
-            break;
-        case Int52RepUse:
-            upsilonValue = lowInt52(m_node-&gt;child1());
-            break;
-        case BooleanUse:
-        case KnownBooleanUse:
-            upsilonValue = lowBoolean(m_node-&gt;child1());
-            break;
-        case CellUse:
-        case KnownCellUse:
-            upsilonValue = lowCell(m_node-&gt;child1());
-            break;
-        case UntypedUse:
-            upsilonValue = lowJSValue(m_node-&gt;child1());
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-        ValueFromBlock upsilon = m_out.anchor(upsilonValue);
-        LValue phiNode = m_phis.get(m_node-&gt;phi());
-        m_out.addIncomingToPhi(phiNode, upsilon);
-    }
-    
-    void compilePhi()
-    {
-        LValue phi = m_phis.get(m_node);
-        m_out.m_block-&gt;append(phi);
-
-        switch (m_node-&gt;flags() &amp; NodeResultMask) {
-        case NodeResultDouble:
-            setDouble(phi);
-            break;
-        case NodeResultInt32:
-            setInt32(phi);
-            break;
-        case NodeResultInt52:
-            setInt52(phi);
-            break;
-        case NodeResultBoolean:
-            setBoolean(phi);
-            break;
-        case NodeResultJS:
-            setJSValue(phi);
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-    
-    void compileDoubleConstant()
-    {
-        setDouble(m_out.constDouble(m_node-&gt;asNumber()));
-    }
-    
-    void compileInt52Constant()
-    {
-        int64_t value = m_node-&gt;asMachineInt();
-        
-        setInt52(m_out.constInt64(value &lt;&lt; JSValue::int52ShiftAmount));
-        setStrictInt52(m_out.constInt64(value));
-    }
-
-    void compileDoubleRep()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case RealNumberUse: {
-            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
-            
-            LValue doubleValue = unboxDouble(value);
-            
-            LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;DoubleRep RealNumberUse int case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;DoubleRep continuation&quot;));
-            
-            ValueFromBlock fastResult = m_out.anchor(doubleValue);
-            m_out.branch(
-                m_out.doubleEqual(doubleValue, doubleValue),
-                usually(continuation), rarely(intCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(intCase, continuation);
-            
-            FTL_TYPE_CHECK(
-                jsValueValue(value), m_node-&gt;child1(), SpecBytecodeRealNumber,
-                isNotInt32(value, provenType(m_node-&gt;child1()) &amp; ~SpecFullDouble));
-            ValueFromBlock slowResult = m_out.anchor(m_out.intToDouble(unboxInt32(value)));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            
-            setDouble(m_out.phi(m_out.doubleType, fastResult, slowResult));
-            return;
-        }
-            
-        case NotCellUse:
-        case NumberUse: {
-            bool shouldConvertNonNumber = m_node-&gt;child1().useKind() == NotCellUse;
-            
-            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
-
-            LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing int case&quot;));
-            LBasicBlock doubleTesting = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing double case&quot;));
-            LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing double case&quot;));
-            LBasicBlock nonDoubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing undefined case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble unboxing continuation&quot;));
-            
-            m_out.branch(
-                isNotInt32(value, provenType(m_node-&gt;child1())),
-                unsure(doubleTesting), unsure(intCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(intCase, doubleTesting);
-            
-            ValueFromBlock intToDouble = m_out.anchor(
-                m_out.intToDouble(unboxInt32(value)));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(doubleTesting, doubleCase);
-            LValue valueIsNumber = isNumber(value, provenType(m_node-&gt;child1()));
-            m_out.branch(valueIsNumber, usually(doubleCase), rarely(nonDoubleCase));
-
-            m_out.appendTo(doubleCase, nonDoubleCase);
-            ValueFromBlock unboxedDouble = m_out.anchor(unboxDouble(value));
-            m_out.jump(continuation);
-
-            if (shouldConvertNonNumber) {
-                LBasicBlock undefinedCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble converting undefined case&quot;));
-                LBasicBlock testNullCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing null case&quot;));
-                LBasicBlock nullCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble converting null case&quot;));
-                LBasicBlock testBooleanTrueCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble testing boolean true case&quot;));
-                LBasicBlock convertBooleanTrueCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble convert boolean true case&quot;));
-                LBasicBlock convertBooleanFalseCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToDouble convert boolean false case&quot;));
-
-                m_out.appendTo(nonDoubleCase, undefinedCase);
-                LValue valueIsUndefined = m_out.equal(value, m_out.constInt64(ValueUndefined));
-                m_out.branch(valueIsUndefined, unsure(undefinedCase), unsure(testNullCase));
-
-                m_out.appendTo(undefinedCase, testNullCase);
-                ValueFromBlock convertedUndefined = m_out.anchor(m_out.constDouble(PNaN));
-                m_out.jump(continuation);
-
-                m_out.appendTo(testNullCase, nullCase);
-                LValue valueIsNull = m_out.equal(value, m_out.constInt64(ValueNull));
-                m_out.branch(valueIsNull, unsure(nullCase), unsure(testBooleanTrueCase));
-
-                m_out.appendTo(nullCase, testBooleanTrueCase);
-                ValueFromBlock convertedNull = m_out.anchor(m_out.constDouble(0));
-                m_out.jump(continuation);
-
-                m_out.appendTo(testBooleanTrueCase, convertBooleanTrueCase);
-                LValue valueIsBooleanTrue = m_out.equal(value, m_out.constInt64(ValueTrue));
-                m_out.branch(valueIsBooleanTrue, unsure(convertBooleanTrueCase), unsure(convertBooleanFalseCase));
-
-                m_out.appendTo(convertBooleanTrueCase, convertBooleanFalseCase);
-                ValueFromBlock convertedTrue = m_out.anchor(m_out.constDouble(1));
-                m_out.jump(continuation);
-
-                m_out.appendTo(convertBooleanFalseCase, continuation);
-
-                LValue valueIsNotBooleanFalse = m_out.notEqual(value, m_out.constInt64(ValueFalse));
-                FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), ~SpecCell, valueIsNotBooleanFalse);
-                ValueFromBlock convertedFalse = m_out.anchor(m_out.constDouble(0));
-                m_out.jump(continuation);
-
-                m_out.appendTo(continuation, lastNext);
-                setDouble(m_out.phi(m_out.doubleType, intToDouble, unboxedDouble, convertedUndefined, convertedNull, convertedTrue, convertedFalse));
-                return;
-            }
-            m_out.appendTo(nonDoubleCase, continuation);
-            FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), SpecBytecodeNumber, m_out.booleanTrue);
-            m_out.unreachable();
-
-            m_out.appendTo(continuation, lastNext);
-
-            setDouble(m_out.phi(m_out.doubleType, intToDouble, unboxedDouble));
-            return;
-        }
-            
-        case Int52RepUse: {
-            setDouble(strictInt52ToDouble(lowStrictInt52(m_node-&gt;child1())));
-            return;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-        }
-    }
-
-    void compileDoubleAsInt32()
-    {
-        LValue integerValue = convertDoubleToInt32(lowDouble(m_node-&gt;child1()), shouldCheckNegativeZero(m_node-&gt;arithMode()));
-        setInt32(integerValue);
-    }
-
-    void compileValueRep()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case DoubleRepUse: {
-            LValue value = lowDouble(m_node-&gt;child1());
-            
-            if (m_interpreter.needsTypeCheck(m_node-&gt;child1(), ~SpecDoubleImpureNaN)) {
-                value = m_out.select(
-                    m_out.doubleEqual(value, value), value, m_out.constDouble(PNaN));
-            }
-            
-            setJSValue(boxDouble(value));
-            return;
-        }
-            
-        case Int52RepUse: {
-            setJSValue(strictInt52ToJSValue(lowStrictInt52(m_node-&gt;child1())));
-            return;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-        }
-    }
-    
-    void compileInt52Rep()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case Int32Use:
-            setStrictInt52(m_out.signExt32To64(lowInt32(m_node-&gt;child1())));
-            return;
-            
-        case MachineIntUse:
-            setStrictInt52(
-                jsValueToStrictInt52(
-                    m_node-&gt;child1(), lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation)));
-            return;
-            
-        case DoubleRepMachineIntUse:
-            setStrictInt52(
-                doubleToStrictInt52(
-                    m_node-&gt;child1(), lowDouble(m_node-&gt;child1())));
-            return;
-            
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-        }
-    }
-    
-    void compileValueToInt32()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case Int52RepUse:
-            setInt32(m_out.castToInt32(lowStrictInt52(m_node-&gt;child1())));
-            break;
-            
-        case DoubleRepUse:
-            setInt32(doubleToInt32(lowDouble(m_node-&gt;child1())));
-            break;
-            
-        case NumberUse:
-        case NotCellUse: {
-            LoweredNodeValue value = m_int32Values.get(m_node-&gt;child1().node());
-            if (isValid(value)) {
-                setInt32(value.value());
-                break;
-            }
-            
-            value = m_jsValueValues.get(m_node-&gt;child1().node());
-            if (isValid(value)) {
-                setInt32(numberOrNotCellToInt32(m_node-&gt;child1(), value.value()));
-                break;
-            }
-            
-            // We'll basically just get here for constants. But it's good to have this
-            // catch-all since we often add new representations into the mix.
-            setInt32(
-                numberOrNotCellToInt32(
-                    m_node-&gt;child1(),
-                    lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation)));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-    
-    void compileBooleanToNumber()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case BooleanUse: {
-            setInt32(m_out.zeroExt(lowBoolean(m_node-&gt;child1()), m_out.int32));
-            return;
-        }
-            
-        case UntypedUse: {
-            LValue value = lowJSValue(m_node-&gt;child1());
-            
-            if (!m_interpreter.needsTypeCheck(m_node-&gt;child1(), SpecBoolInt32 | SpecBoolean)) {
-                setInt32(m_out.bitAnd(m_out.castToInt32(value), m_out.int32One));
-                return;
-            }
-            
-            LBasicBlock booleanCase = FTL_NEW_BLOCK(m_out, (&quot;BooleanToNumber boolean case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;BooleanToNumber continuation&quot;));
-            
-            ValueFromBlock notBooleanResult = m_out.anchor(value);
-            m_out.branch(
-                isBoolean(value, provenType(m_node-&gt;child1())),
-                unsure(booleanCase), unsure(continuation));
-            
-            LBasicBlock lastNext = m_out.appendTo(booleanCase, continuation);
-            ValueFromBlock booleanResult = m_out.anchor(m_out.bitOr(
-                m_out.zeroExt(unboxBoolean(value), m_out.int64), m_tagTypeNumber));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, booleanResult, notBooleanResult));
-            return;
-        }
-            
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-            return;
-        }
-    }
-
-    void compileExtractOSREntryLocal()
-    {
-        EncodedJSValue* buffer = static_cast&lt;EncodedJSValue*&gt;(
-            m_ftlState.jitCode-&gt;ftlForOSREntry()-&gt;entryBuffer()-&gt;dataBuffer());
-        setJSValue(m_out.load64(m_out.absolute(buffer + m_node-&gt;unlinkedLocal().toLocal())));
-    }
-    
-    void compileGetStack()
-    {
-        // GetLocals arise only for captured variables and arguments. For arguments, we might have
-        // already loaded it.
-        if (LValue value = m_loadedArgumentValues.get(m_node)) {
-            setJSValue(value);
-            return;
-        }
-        
-        StackAccessData* data = m_node-&gt;stackAccessData();
-        AbstractValue&amp; value = m_state.variables().operand(data-&gt;local);
-        
-        DFG_ASSERT(m_graph, m_node, isConcrete(data-&gt;format));
-        DFG_ASSERT(m_graph, m_node, data-&gt;format != FlushedDouble); // This just happens to not arise for GetStacks, right now. It would be trivial to support.
-        
-        if (isInt32Speculation(value.m_type))
-            setInt32(m_out.load32(payloadFor(data-&gt;machineLocal)));
-        else
-            setJSValue(m_out.load64(addressFor(data-&gt;machineLocal)));
-    }
-    
-    void compilePutStack()
-    {
-        StackAccessData* data = m_node-&gt;stackAccessData();
-        switch (data-&gt;format) {
-        case FlushedJSValue: {
-            LValue value = lowJSValue(m_node-&gt;child1());
-            m_out.store64(value, addressFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        case FlushedDouble: {
-            LValue value = lowDouble(m_node-&gt;child1());
-            m_out.storeDouble(value, addressFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        case FlushedInt32: {
-            LValue value = lowInt32(m_node-&gt;child1());
-            m_out.store32(value, payloadFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        case FlushedInt52: {
-            LValue value = lowInt52(m_node-&gt;child1());
-            m_out.store64(value, addressFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        case FlushedCell: {
-            LValue value = lowCell(m_node-&gt;child1());
-            m_out.store64(value, addressFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        case FlushedBoolean: {
-            speculateBoolean(m_node-&gt;child1());
-            m_out.store64(
-                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
-                addressFor(data-&gt;machineLocal));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad flush format&quot;);
-            break;
-        }
-    }
-    
-    void compileNoOp()
-    {
-        DFG_NODE_DO_TO_CHILDREN(m_graph, m_node, speculate);
-    }
-    
-    void compileToThis()
-    {
-        LValue value = lowJSValue(m_node-&gt;child1());
-        
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;ToThis is cell case&quot;));
-        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ToThis slow case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToThis continuation&quot;));
-        
-        m_out.branch(
-            isCell(value, provenType(m_node-&gt;child1())), usually(isCellCase), rarely(slowCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, slowCase);
-        ValueFromBlock fastResult = m_out.anchor(value);
-        m_out.branch(isType(value, FinalObjectType), usually(continuation), rarely(slowCase));
-        
-        m_out.appendTo(slowCase, continuation);
-        J_JITOperation_EJ function;
-        if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
-            function = operationToThisStrict;
-        else
-            function = operationToThis;
-        ValueFromBlock slowResult = m_out.anchor(
-            vmCall(m_out.int64, m_out.operation(function), m_callFrame, value));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-    }
-    
-    void compileValueAdd()
-    {
-        emitBinarySnippet&lt;JITAddGenerator&gt;(operationValueAdd);
-    }
-    
-    void compileStrCat()
-    {
-        LValue result;
-        if (m_node-&gt;child3()) {
-            result = vmCall(
-                m_out.int64, m_out.operation(operationStrCat3), m_callFrame,
-                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
-                lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation),
-                lowJSValue(m_node-&gt;child3(), ManualOperandSpeculation));
-        } else {
-            result = vmCall(
-                m_out.int64, m_out.operation(operationStrCat2), m_callFrame,
-                lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation),
-                lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation));
-        }
-        setJSValue(result);
-    }
-    
-    void compileArithAddOrSub()
-    {
-        bool isSub =  m_node-&gt;op() == ArithSub;
-        switch (m_node-&gt;binaryUseKind()) {
-        case Int32Use: {
-            LValue left = lowInt32(m_node-&gt;child1());
-            LValue right = lowInt32(m_node-&gt;child2());
-
-            if (!shouldCheckOverflow(m_node-&gt;arithMode())) {
-                setInt32(isSub ? m_out.sub(left, right) : m_out.add(left, right));
-                break;
-            }
-
-            CheckValue* result =
-                isSub ? m_out.speculateSub(left, right) : m_out.speculateAdd(left, right);
-            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
-            setInt32(result);
-            break;
-        }
-            
-        case Int52RepUse: {
-            if (!abstractValue(m_node-&gt;child1()).couldBeType(SpecInt52)
-                &amp;&amp; !abstractValue(m_node-&gt;child2()).couldBeType(SpecInt52)) {
-                Int52Kind kind;
-                LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
-                LValue right = lowInt52(m_node-&gt;child2(), kind);
-                setInt52(isSub ? m_out.sub(left, right) : m_out.add(left, right), kind);
-                break;
-            }
-
-            LValue left = lowInt52(m_node-&gt;child1());
-            LValue right = lowInt52(m_node-&gt;child2());
-            CheckValue* result =
-                isSub ? m_out.speculateSub(left, right) : m_out.speculateAdd(left, right);
-            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
-            setInt52(result);
-            break;
-        }
-            
-        case DoubleRepUse: {
-            LValue C1 = lowDouble(m_node-&gt;child1());
-            LValue C2 = lowDouble(m_node-&gt;child2());
-
-            setDouble(isSub ? m_out.doubleSub(C1, C2) : m_out.doubleAdd(C1, C2));
-            break;
-        }
-
-        case UntypedUse: {
-            if (!isSub) {
-                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                break;
-            }
-
-            emitBinarySnippet&lt;JITSubGenerator&gt;(operationValueSub);
-            break;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-
-    void compileArithClz32()
-    {
-        LValue operand = lowInt32(m_node-&gt;child1());
-        setInt32(m_out.ctlz32(operand));
-    }
-    
-    void compileArithMul()
-    {
-        switch (m_node-&gt;binaryUseKind()) {
-        case Int32Use: {
-            LValue left = lowInt32(m_node-&gt;child1());
-            LValue right = lowInt32(m_node-&gt;child2());
-            
-            LValue result;
-
-            if (!shouldCheckOverflow(m_node-&gt;arithMode()))
-                result = m_out.mul(left, right);
-            else {
-                CheckValue* speculation = m_out.speculateMul(left, right);
-                blessSpeculation(speculation, Overflow, noValue(), nullptr, m_origin);
-                result = speculation;
-            }
-            
-            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
-                LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArithMul slow case&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMul continuation&quot;));
-                
-                m_out.branch(
-                    m_out.notZero32(result), usually(continuation), rarely(slowCase));
-                
-                LBasicBlock lastNext = m_out.appendTo(slowCase, continuation);
-                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(left, m_out.int32Zero));
-                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(right, m_out.int32Zero));
-                m_out.jump(continuation);
-                m_out.appendTo(continuation, lastNext);
-            }
-            
-            setInt32(result);
-            break;
-        }
-            
-        case Int52RepUse: {
-            Int52Kind kind;
-            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
-            LValue right = lowInt52(m_node-&gt;child2(), opposite(kind));
-
-            CheckValue* result = m_out.speculateMul(left, right);
-            blessSpeculation(result, Overflow, noValue(), nullptr, m_origin);
-
-            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
-                LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArithMul slow case&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMul continuation&quot;));
-                
-                m_out.branch(
-                    m_out.notZero64(result), usually(continuation), rarely(slowCase));
-                
-                LBasicBlock lastNext = m_out.appendTo(slowCase, continuation);
-                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(left, m_out.int64Zero));
-                speculate(NegativeZero, noValue(), nullptr, m_out.lessThan(right, m_out.int64Zero));
-                m_out.jump(continuation);
-                m_out.appendTo(continuation, lastNext);
-            }
-            
-            setInt52(result);
-            break;
-        }
-            
-        case DoubleRepUse: {
-            setDouble(
-                m_out.doubleMul(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
-            break;
-        }
-
-        case UntypedUse: {
-            emitBinarySnippet&lt;JITMulGenerator&gt;(operationValueMul);
-            break;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-
-    void compileArithDiv()
-    {
-        switch (m_node-&gt;binaryUseKind()) {
-        case Int32Use: {
-            LValue numerator = lowInt32(m_node-&gt;child1());
-            LValue denominator = lowInt32(m_node-&gt;child2());
-
-            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
-                LBasicBlock zeroNumerator = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv zero numerator&quot;));
-                LBasicBlock numeratorContinuation = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv numerator continuation&quot;));
-
-                m_out.branch(
-                    m_out.isZero32(numerator),
-                    rarely(zeroNumerator), usually(numeratorContinuation));
-
-                LBasicBlock innerLastNext = m_out.appendTo(zeroNumerator, numeratorContinuation);
-
-                speculate(
-                    NegativeZero, noValue(), 0, m_out.lessThan(denominator, m_out.int32Zero));
-
-                m_out.jump(numeratorContinuation);
-
-                m_out.appendTo(numeratorContinuation, innerLastNext);
-            }
-            
-            if (shouldCheckOverflow(m_node-&gt;arithMode())) {
-                LBasicBlock unsafeDenominator = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv unsafe denominator&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithDiv continuation&quot;));
-
-                LValue adjustedDenominator = m_out.add(denominator, m_out.int32One);
-                m_out.branch(
-                    m_out.above(adjustedDenominator, m_out.int32One),
-                    usually(continuation), rarely(unsafeDenominator));
-
-                LBasicBlock lastNext = m_out.appendTo(unsafeDenominator, continuation);
-                LValue neg2ToThe31 = m_out.constInt32(-2147483647-1);
-                speculate(Overflow, noValue(), nullptr, m_out.isZero32(denominator));
-                speculate(Overflow, noValue(), nullptr, m_out.equal(numerator, neg2ToThe31));
-                m_out.jump(continuation);
-
-                m_out.appendTo(continuation, lastNext);
-                LValue result = m_out.div(numerator, denominator);
-                speculate(
-                    Overflow, noValue(), 0,
-                    m_out.notEqual(m_out.mul(result, denominator), numerator));
-                setInt32(result);
-            } else
-                setInt32(m_out.chillDiv(numerator, denominator));
-
-            break;
-        }
-            
-        case DoubleRepUse: {
-            setDouble(m_out.doubleDiv(
-                lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
-            break;
-        }
-
-        case UntypedUse: {
-            emitBinarySnippet&lt;JITDivGenerator, NeedScratchFPR&gt;(operationValueDiv);
-            break;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-    
-    void compileArithMod()
-    {
-        switch (m_node-&gt;binaryUseKind()) {
-        case Int32Use: {
-            LValue numerator = lowInt32(m_node-&gt;child1());
-            LValue denominator = lowInt32(m_node-&gt;child2());
-
-            LValue remainder;
-            if (shouldCheckOverflow(m_node-&gt;arithMode())) {
-                LBasicBlock unsafeDenominator = FTL_NEW_BLOCK(m_out, (&quot;ArithMod unsafe denominator&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMod continuation&quot;));
-
-                LValue adjustedDenominator = m_out.add(denominator, m_out.int32One);
-                m_out.branch(
-                    m_out.above(adjustedDenominator, m_out.int32One),
-                    usually(continuation), rarely(unsafeDenominator));
-
-                LBasicBlock lastNext = m_out.appendTo(unsafeDenominator, continuation);
-                LValue neg2ToThe31 = m_out.constInt32(-2147483647-1);
-                speculate(Overflow, noValue(), nullptr, m_out.isZero32(denominator));
-                speculate(Overflow, noValue(), nullptr, m_out.equal(numerator, neg2ToThe31));
-                m_out.jump(continuation);
-
-                m_out.appendTo(continuation, lastNext);
-                LValue result = m_out.mod(numerator, denominator);
-                remainder = result;
-            } else
-                remainder = m_out.chillMod(numerator, denominator);
-
-            if (shouldCheckNegativeZero(m_node-&gt;arithMode())) {
-                LBasicBlock negativeNumerator = FTL_NEW_BLOCK(m_out, (&quot;ArithMod negative numerator&quot;));
-                LBasicBlock numeratorContinuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMod numerator continuation&quot;));
-
-                m_out.branch(
-                    m_out.lessThan(numerator, m_out.int32Zero),
-                    unsure(negativeNumerator), unsure(numeratorContinuation));
-
-                LBasicBlock innerLastNext = m_out.appendTo(negativeNumerator, numeratorContinuation);
-
-                speculate(NegativeZero, noValue(), 0, m_out.isZero32(remainder));
-
-                m_out.jump(numeratorContinuation);
-
-                m_out.appendTo(numeratorContinuation, innerLastNext);
-            }
-
-            setInt32(remainder);
-            break;
-        }
-            
-        case DoubleRepUse: {
-            setDouble(
-                m_out.doubleMod(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-
-    void compileArithMinOrMax()
-    {
-        switch (m_node-&gt;binaryUseKind()) {
-        case Int32Use: {
-            LValue left = lowInt32(m_node-&gt;child1());
-            LValue right = lowInt32(m_node-&gt;child2());
-            
-            setInt32(
-                m_out.select(
-                    m_node-&gt;op() == ArithMin
-                        ? m_out.lessThan(left, right)
-                        : m_out.lessThan(right, left),
-                    left, right));
-            break;
-        }
-            
-        case DoubleRepUse: {
-            LValue left = lowDouble(m_node-&gt;child1());
-            LValue right = lowDouble(m_node-&gt;child2());
-            
-            LBasicBlock notLessThan = FTL_NEW_BLOCK(m_out, (&quot;ArithMin/ArithMax not less than&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithMin/ArithMax continuation&quot;));
-            
-            Vector&lt;ValueFromBlock, 2&gt; results;
-            
-            results.append(m_out.anchor(left));
-            m_out.branch(
-                m_node-&gt;op() == ArithMin
-                    ? m_out.doubleLessThan(left, right)
-                    : m_out.doubleGreaterThan(left, right),
-                unsure(continuation), unsure(notLessThan));
-            
-            LBasicBlock lastNext = m_out.appendTo(notLessThan, continuation);
-            results.append(m_out.anchor(m_out.select(
-                m_node-&gt;op() == ArithMin
-                    ? m_out.doubleGreaterThanOrEqual(left, right)
-                    : m_out.doubleLessThanOrEqual(left, right),
-                right, m_out.constDouble(PNaN))));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setDouble(m_out.phi(m_out.doubleType, results));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-    
-    void compileArithAbs()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case Int32Use: {
-            LValue value = lowInt32(m_node-&gt;child1());
-
-            LValue mask = m_out.aShr(value, m_out.constInt32(31));
-            LValue result = m_out.bitXor(mask, m_out.add(mask, value));
-
-            if (shouldCheckOverflow(m_node-&gt;arithMode()))
-                speculate(Overflow, noValue(), 0, m_out.equal(result, m_out.constInt32(1 &lt;&lt; 31)));
-
-            setInt32(result);
-            break;
-        }
-            
-        case DoubleRepUse: {
-            setDouble(m_out.doubleAbs(lowDouble(m_node-&gt;child1())));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-
-    void compileArithSin() { setDouble(m_out.doubleSin(lowDouble(m_node-&gt;child1()))); }
-
-    void compileArithCos() { setDouble(m_out.doubleCos(lowDouble(m_node-&gt;child1()))); }
-
-    void compileArithPow()
-    {
-        // FIXME: investigate llvm.powi to better understand its performance characteristics.
-        // It might be better to have the inline loop in DFG too.
-        if (m_node-&gt;child2().useKind() == Int32Use)
-            setDouble(m_out.doublePowi(lowDouble(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-        else {
-            LValue base = lowDouble(m_node-&gt;child1());
-            LValue exponent = lowDouble(m_node-&gt;child2());
-
-            LBasicBlock integerExponentIsSmallBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow test integer exponent is small.&quot;));
-            LBasicBlock integerExponentPowBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow pow(double, (int)double).&quot;));
-            LBasicBlock doubleExponentPowBlockEntry = FTL_NEW_BLOCK(m_out, (&quot;ArithPow pow(double, double).&quot;));
-            LBasicBlock nanExceptionExponentIsInfinity = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, check exponent is infinity.&quot;));
-            LBasicBlock nanExceptionBaseIsOne = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, check base is one.&quot;));
-            LBasicBlock powBlock = FTL_NEW_BLOCK(m_out, (&quot;ArithPow regular pow&quot;));
-            LBasicBlock nanExceptionResultIsNaN = FTL_NEW_BLOCK(m_out, (&quot;ArithPow NaN Exception, result is NaN.&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithPow continuation&quot;));
-
-            LValue integerExponent = m_out.doubleToInt(exponent);
-            LValue integerExponentConvertedToDouble = m_out.intToDouble(integerExponent);
-            LValue exponentIsInteger = m_out.doubleEqual(exponent, integerExponentConvertedToDouble);
-            m_out.branch(exponentIsInteger, unsure(integerExponentIsSmallBlock), unsure(doubleExponentPowBlockEntry));
-
-            LBasicBlock lastNext = m_out.appendTo(integerExponentIsSmallBlock, integerExponentPowBlock);
-            LValue integerExponentBelow1000 = m_out.below(integerExponent, m_out.constInt32(1000));
-            m_out.branch(integerExponentBelow1000, usually(integerExponentPowBlock), rarely(doubleExponentPowBlockEntry));
-
-            m_out.appendTo(integerExponentPowBlock, doubleExponentPowBlockEntry);
-            ValueFromBlock powDoubleIntResult = m_out.anchor(m_out.doublePowi(base, integerExponent));
-            m_out.jump(continuation);
-
-            // If y is NaN, the result is NaN.
-            m_out.appendTo(doubleExponentPowBlockEntry, nanExceptionExponentIsInfinity);
-            LValue exponentIsNaN;
-            if (provenType(m_node-&gt;child2()) &amp; SpecDoubleNaN)
-                exponentIsNaN = m_out.doubleNotEqualOrUnordered(exponent, exponent);
-            else
-                exponentIsNaN = m_out.booleanFalse;
-            m_out.branch(exponentIsNaN, rarely(nanExceptionResultIsNaN), usually(nanExceptionExponentIsInfinity));
-
-            // If abs(x) is 1 and y is +infinity, the result is NaN.
-            // If abs(x) is 1 and y is -infinity, the result is NaN.
-            m_out.appendTo(nanExceptionExponentIsInfinity, nanExceptionBaseIsOne);
-            LValue absoluteExponent = m_out.doubleAbs(exponent);
-            LValue absoluteExponentIsInfinity = m_out.doubleEqual(absoluteExponent, m_out.constDouble(std::numeric_limits&lt;double&gt;::infinity()));
-            m_out.branch(absoluteExponentIsInfinity, rarely(nanExceptionBaseIsOne), usually(powBlock));
-
-            m_out.appendTo(nanExceptionBaseIsOne, powBlock);
-            LValue absoluteBase = m_out.doubleAbs(base);
-            LValue absoluteBaseIsOne = m_out.doubleEqual(absoluteBase, m_out.constDouble(1));
-            m_out.branch(absoluteBaseIsOne, unsure(nanExceptionResultIsNaN), unsure(powBlock));
-
-            m_out.appendTo(powBlock, nanExceptionResultIsNaN);
-            ValueFromBlock powResult = m_out.anchor(m_out.doublePow(base, exponent));
-            m_out.jump(continuation);
-
-            m_out.appendTo(nanExceptionResultIsNaN, continuation);
-            ValueFromBlock pureNan = m_out.anchor(m_out.constDouble(PNaN));
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            setDouble(m_out.phi(m_out.doubleType, powDoubleIntResult, powResult, pureNan));
-        }
-    }
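The ArithPow lowering above encodes the ECMAScript `Math.pow` special cases as a branch tree. As a rough standalone sketch (plain C++, not the FTL IR; `jsPow` is a hypothetical name), the same decision order looks like this: NaN exponents give NaN, small non-negative integer exponents take the cheaper `pow(double, int)` path, and `|base| == 1` with an infinite exponent gives NaN.

```cpp
#include <cmath>
#include <limits>

// Hypothetical scalar equivalent of the ArithPow lowering above.
double jsPow(double base, double exponent)
{
    // If y is NaN, the result is NaN.
    if (std::isnan(exponent))
        return std::numeric_limits<double>::quiet_NaN();
    // Small non-negative integer exponents take the powi fast path
    // (the compiled code uses an unsigned below-1000 check).
    if (exponent == std::trunc(exponent) && exponent >= 0 && exponent < 1000)
        return std::pow(base, static_cast<int>(exponent));
    // If abs(x) is 1 and y is +/-infinity, the result is NaN.
    if (std::fabs(base) == 1 && std::isinf(exponent))
        return std::numeric_limits<double>::quiet_NaN();
    return std::pow(base, exponent);
}
```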
-
-    void compileArithRandom()
-    {
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-
-        // Inlined WeakRandom::advance().
-        // uint64_t x = m_low;
-        void* lowAddress = reinterpret_cast&lt;uint8_t*&gt;(globalObject) + JSGlobalObject::weakRandomOffset() + WeakRandom::lowOffset();
-        LValue low = m_out.load64(m_out.absolute(lowAddress));
-        // uint64_t y = m_high;
-        void* highAddress = reinterpret_cast&lt;uint8_t*&gt;(globalObject) + JSGlobalObject::weakRandomOffset() + WeakRandom::highOffset();
-        LValue high = m_out.load64(m_out.absolute(highAddress));
-        // m_low = y;
-        m_out.store64(high, m_out.absolute(lowAddress));
-
-        // x ^= x &lt;&lt; 23;
-        LValue phase1 = m_out.bitXor(m_out.shl(low, m_out.constInt64(23)), low);
-
-        // x ^= x &gt;&gt; 17;
-        LValue phase2 = m_out.bitXor(m_out.lShr(phase1, m_out.constInt64(17)), phase1);
-
-        // x ^= y ^ (y &gt;&gt; 26);
-        LValue phase3 = m_out.bitXor(m_out.bitXor(high, m_out.lShr(high, m_out.constInt64(26))), phase2);
-
-        // m_high = x;
-        m_out.store64(phase3, m_out.absolute(highAddress));
-
-        // return x + y;
-        LValue random64 = m_out.add(phase3, high);
-
-        // Extract a random 53-bit value. Integers in [0, 2^53] are exactly representable as doubles.
-        LValue random53 = m_out.bitAnd(random64, m_out.constInt64((1ULL &lt;&lt; 53) - 1));
-
-        LValue double53Integer = m_out.intToDouble(random53);
-
-        // Instead of computing `(53-bit integer value) / (1 &lt;&lt; 53)`, multiply by `1.0 / (1 &lt;&lt; 53)`.
-        // That constant is the power of two 2^-53, exactly representable as a double (mantissa = 0, biased exponent = 970).
-        static const double scale = 1.0 / (1ULL &lt;&lt; 53);
-
-        // Multiplying by 2^-53 leaves the mantissa of the 53-bit integer unchanged;
-        // it only reduces the exponent. (0.0 is the exception: it is handled specially,
-        // and its exponent simply stays 0.)
-        // The result is a random double with 53 bits of precision in [0, 1).
-        LValue result = m_out.doubleMul(double53Integer, m_out.constDouble(scale));
-
-        setDouble(result);
-    }
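The inlined `WeakRandom::advance()` above is the xorshift128+ generator followed by a 53-bit-to-[0, 1) conversion. A standalone sketch of the same arithmetic (assumption: caller-supplied seeds; WeakRandom's own seeding and state layout are omitted):

```cpp
#include <cstdint>

// Standalone xorshift128+ step matching the inlined code above.
struct WeakRandomSketch {
    uint64_t low;
    uint64_t high;

    double get()
    {
        uint64_t x = low;
        uint64_t y = high;
        low = y;
        x ^= x << 23;
        x ^= x >> 17;
        x ^= y ^ (y >> 26);
        high = x;
        uint64_t random64 = x + y;
        // Keep 53 bits: integers up to 2^53 convert to double losslessly,
        // so scaling by 2^-53 yields a uniform value in [0, 1).
        uint64_t random53 = random64 & ((1ULL << 53) - 1);
        return static_cast<double>(random53) * (1.0 / (1ULL << 53));
    }
};
```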
-
-    void compileArithRound()
-    {
-        LBasicBlock realPartIsMoreThanHalf = FTL_NEW_BLOCK(m_out, (&quot;ArithRound should round down&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArithRound continuation&quot;));
-
-        LValue value = lowDouble(m_node-&gt;child1());
-        LValue integerValue = m_out.doubleCeil(value);
-        ValueFromBlock integerValueResult = m_out.anchor(integerValue);
-
-        LValue realPart = m_out.doubleSub(integerValue, value);
-
-        m_out.branch(m_out.doubleGreaterThanOrUnordered(realPart, m_out.constDouble(0.5)), unsure(realPartIsMoreThanHalf), unsure(continuation));
-
-        LBasicBlock lastNext = m_out.appendTo(realPartIsMoreThanHalf, continuation);
-        LValue integerValueRoundedDown = m_out.doubleSub(integerValue, m_out.constDouble(1));
-        ValueFromBlock integerValueRoundedDownResult = m_out.anchor(integerValueRoundedDown);
-        m_out.jump(continuation);
-        m_out.appendTo(continuation, lastNext);
-
-        LValue result = m_out.phi(m_out.doubleType, integerValueResult, integerValueRoundedDownResult);
-
-        if (producesInteger(m_node-&gt;arithRoundingMode())) {
-            LValue integerValue = convertDoubleToInt32(result, shouldCheckNegativeZero(m_node-&gt;arithRoundingMode()));
-            setInt32(integerValue);
-        } else
-            setDouble(result);
-    }
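The ArithRound lowering implements the JavaScript `Math.round` rule, which rounds halfway cases toward +infinity, by taking `ceil(x)` and backing off by one when the distance to the argument exceeds 0.5. A scalar sketch (plain C++, hypothetical `jsRound` name):

```cpp
#include <cmath>

// Scalar equivalent of the ArithRound lowering: ceil, then subtract 1
// when the "real part" (ceil(x) - x) is more than half.
double jsRound(double value)
{
    double up = std::ceil(value);
    return (up - value > 0.5) ? up - 1 : up;
}
```

Note that `jsRound(-0.5)` yields `-0.0` rather than `-1`, which is why the integer path needs the negative-zero check from the node's rounding mode.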
-
-    void compileArithSqrt() { setDouble(m_out.doubleSqrt(lowDouble(m_node-&gt;child1()))); }
-
-    void compileArithLog() { setDouble(m_out.doubleLog(lowDouble(m_node-&gt;child1()))); }
-    
-    void compileArithFRound()
-    {
-        setDouble(m_out.fround(lowDouble(m_node-&gt;child1())));
-    }
-    
-    void compileArithNegate()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case Int32Use: {
-            LValue value = lowInt32(m_node-&gt;child1());
-            
-            LValue result;
-            if (!shouldCheckOverflow(m_node-&gt;arithMode()))
-                result = m_out.neg(value);
-            else if (!shouldCheckNegativeZero(m_node-&gt;arithMode())) {
-                // &quot;0 - x&quot; is the canonical way of saying &quot;-x&quot; in both B3 and LLVM. This gets
-                // lowered to the right thing.
-                CheckValue* check = m_out.speculateSub(m_out.int32Zero, value);
-                blessSpeculation(check, Overflow, noValue(), nullptr, m_origin);
-                result = check;
-            } else {
-                speculate(Overflow, noValue(), 0, m_out.testIsZero32(value, m_out.constInt32(0x7fffffff)));
-                result = m_out.neg(value);
-            }
-
-            setInt32(result);
-            break;
-        }
-            
-        case Int52RepUse: {
-            if (!abstractValue(m_node-&gt;child1()).couldBeType(SpecInt52)) {
-                Int52Kind kind;
-                LValue value = lowWhicheverInt52(m_node-&gt;child1(), kind);
-                LValue result = m_out.neg(value);
-                if (shouldCheckNegativeZero(m_node-&gt;arithMode()))
-                    speculate(NegativeZero, noValue(), 0, m_out.isZero64(result));
-                setInt52(result, kind);
-                break;
-            }
-            
-            LValue value = lowInt52(m_node-&gt;child1());
-            CheckValue* result = m_out.speculateSub(m_out.int64Zero, value);
-            blessSpeculation(result, Int52Overflow, noValue(), nullptr, m_origin);
-            speculate(NegativeZero, noValue(), 0, m_out.isZero64(result));
-            setInt52(result);
-            break;
-        }
-            
-        case DoubleRepUse: {
-            setDouble(m_out.doubleNeg(lowDouble(m_node-&gt;child1())));
-            break;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
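The Int32 negation path above folds its overflow and negative-zero checks into a single mask test: `value & 0x7fffffff` is zero exactly for the two inputs checked negation cannot produce, 0 (whose negation is -0) and INT32_MIN (whose negation overflows). A small illustration of that predicate (hypothetical helper name):

```cpp
#include <cstdint>

// True exactly for the two Int32 inputs where checked negation must bail:
// 0 (negative-zero result) and INT32_MIN (overflow).
bool negationNeedsBailout(int32_t value)
{
    return (static_cast<uint32_t>(value) & 0x7fffffffu) == 0;
}
```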
-    
-    void compileBitAnd()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitBinaryBitOpSnippet&lt;JITBitAndGenerator&gt;(operationValueBitAnd);
-            return;
-        }
-        setInt32(m_out.bitAnd(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-    }
-    
-    void compileBitOr()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitBinaryBitOpSnippet&lt;JITBitOrGenerator&gt;(operationValueBitOr);
-            return;
-        }
-        setInt32(m_out.bitOr(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-    }
-    
-    void compileBitXor()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitBinaryBitOpSnippet&lt;JITBitXorGenerator&gt;(operationValueBitXor);
-            return;
-        }
-        setInt32(m_out.bitXor(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-    }
-    
-    void compileBitRShift()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitRightShiftSnippet(JITRightShiftGenerator::SignedShift);
-            return;
-        }
-        setInt32(m_out.aShr(
-            lowInt32(m_node-&gt;child1()),
-            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
-    }
-    
-    void compileBitLShift()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitBinaryBitOpSnippet&lt;JITLeftShiftGenerator&gt;(operationValueBitLShift);
-            return;
-        }
-        setInt32(m_out.shl(
-            lowInt32(m_node-&gt;child1()),
-            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
-    }
-    
-    void compileBitURShift()
-    {
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            emitRightShiftSnippet(JITRightShiftGenerator::UnsignedShift);
-            return;
-        }
-        setInt32(m_out.lShr(
-            lowInt32(m_node-&gt;child1()),
-            m_out.bitAnd(lowInt32(m_node-&gt;child2()), m_out.constInt32(31))));
-    }
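All three shift lowerings above mask the shift count with 31 before shifting, matching the ECMAScript rule that only the low five bits of the count are used. A minimal sketch of the typed fast paths (hypothetical function names; the untyped snippet paths are not modeled):

```cpp
#include <cstdint>

// JS shift semantics: the count is taken modulo 32 via a 5-bit mask.
int32_t jsLeftShift(int32_t value, int32_t count)
{
    return value << (count & 31);
}

uint32_t jsUnsignedRightShift(uint32_t value, int32_t count)
{
    return value >> (count & 31);
}
```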
-    
-    void compileUInt32ToNumber()
-    {
-        LValue value = lowInt32(m_node-&gt;child1());
-
-        if (doesOverflow(m_node-&gt;arithMode())) {
-            setDouble(m_out.unsignedToDouble(value));
-            return;
-        }
-        
-        speculate(Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero));
-        setInt32(value);
-    }
-    
-    void compileCheckStructure()
-    {
-        ExitKind exitKind;
-        if (m_node-&gt;child1()-&gt;hasConstant())
-            exitKind = BadConstantCache;
-        else
-            exitKind = BadCache;
-
-        switch (m_node-&gt;child1().useKind()) {
-        case CellUse:
-        case KnownCellUse: {
-            LValue cell = lowCell(m_node-&gt;child1());
-            
-            checkStructure(
-                m_out.load32(cell, m_heaps.JSCell_structureID), jsValueValue(cell),
-                exitKind, m_node-&gt;structureSet(),
-                [&amp;] (Structure* structure) {
-                    return weakStructureID(structure);
-                });
-            return;
-        }
-
-        case CellOrOtherUse: {
-            LValue value = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
-
-            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse cell case&quot;));
-            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse not cell case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CheckStructure CellOrOtherUse continuation&quot;));
-
-            m_out.branch(
-                isCell(value, provenType(m_node-&gt;child1())), unsure(cellCase), unsure(notCellCase));
-
-            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
-            checkStructure(
-                m_out.load32(value, m_heaps.JSCell_structureID), jsValueValue(value),
-                exitKind, m_node-&gt;structureSet(),
-                [&amp;] (Structure* structure) {
-                    return weakStructureID(structure);
-                });
-            m_out.jump(continuation);
-
-            m_out.appendTo(notCellCase, continuation);
-            FTL_TYPE_CHECK(jsValueValue(value), m_node-&gt;child1(), SpecCell | SpecOther, isNotOther(value));
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            return;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            return;
-        }
-    }
-    
-    void compileCheckCell()
-    {
-        LValue cell = lowCell(m_node-&gt;child1());
-        
-        speculate(
-            BadCell, jsValueValue(cell), m_node-&gt;child1().node(),
-            m_out.notEqual(cell, weakPointer(m_node-&gt;cellOperand()-&gt;cell())));
-    }
-    
-    void compileCheckBadCell()
-    {
-        terminate(BadCell);
-    }
-
-    void compileCheckNotEmpty()
-    {
-        speculate(TDZFailure, noValue(), nullptr, m_out.isZero64(lowJSValue(m_node-&gt;child1())));
-    }
-
-    void compileCheckIdent()
-    {
-        UniquedStringImpl* uid = m_node-&gt;uidOperand();
-        if (uid-&gt;isSymbol()) {
-            LValue symbol = lowSymbol(m_node-&gt;child1());
-            LValue stringImpl = m_out.loadPtr(symbol, m_heaps.Symbol_privateName);
-            speculate(BadIdent, noValue(), nullptr, m_out.notEqual(stringImpl, m_out.constIntPtr(uid)));
-        } else {
-            LValue string = lowStringIdent(m_node-&gt;child1());
-            LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
-            speculate(BadIdent, noValue(), nullptr, m_out.notEqual(stringImpl, m_out.constIntPtr(uid)));
-        }
-    }
-
-    void compileGetExecutable()
-    {
-        LValue cell = lowCell(m_node-&gt;child1());
-        speculateFunction(m_node-&gt;child1(), cell);
-        setJSValue(m_out.loadPtr(cell, m_heaps.JSFunction_executable));
-    }
-    
-    void compileArrayifyToStructure()
-    {
-        LValue cell = lowCell(m_node-&gt;child1());
-        LValue property = !!m_node-&gt;child2() ? lowInt32(m_node-&gt;child2()) : 0;
-        
-        LBasicBlock unexpectedStructure = FTL_NEW_BLOCK(m_out, (&quot;ArrayifyToStructure unexpected structure&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayifyToStructure continuation&quot;));
-        
-        LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
-        
-        m_out.branch(
-            m_out.notEqual(structureID, weakStructureID(m_node-&gt;structure())),
-            rarely(unexpectedStructure), usually(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(unexpectedStructure, continuation);
-        
-        if (property) {
-            switch (m_node-&gt;arrayMode().type()) {
-            case Array::Int32:
-            case Array::Double:
-            case Array::Contiguous:
-                speculate(
-                    Uncountable, noValue(), 0,
-                    m_out.aboveOrEqual(property, m_out.constInt32(MIN_SPARSE_ARRAY_INDEX)));
-                break;
-            default:
-                break;
-            }
-        }
-        
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-            vmCall(m_out.voidType, m_out.operation(operationEnsureInt32), m_callFrame, cell);
-            break;
-        case Array::Double:
-            vmCall(m_out.voidType, m_out.operation(operationEnsureDouble), m_callFrame, cell);
-            break;
-        case Array::Contiguous:
-            vmCall(m_out.voidType, m_out.operation(operationEnsureContiguous), m_callFrame, cell);
-            break;
-        case Array::ArrayStorage:
-        case Array::SlowPutArrayStorage:
-            vmCall(m_out.voidType, m_out.operation(operationEnsureArrayStorage), m_callFrame, cell);
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            break;
-        }
-        
-        structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
-        speculate(
-            BadIndexingType, jsValueValue(cell), 0,
-            m_out.notEqual(structureID, weakStructureID(m_node-&gt;structure())));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void compilePutStructure()
-    {
-        m_ftlState.jitCode-&gt;common.notifyCompilingStructureTransition(m_graph.m_plan, codeBlock(), m_node);
-
-        Structure* oldStructure = m_node-&gt;transition()-&gt;previous;
-        Structure* newStructure = m_node-&gt;transition()-&gt;next;
-        ASSERT_UNUSED(oldStructure, oldStructure-&gt;indexingType() == newStructure-&gt;indexingType());
-        ASSERT(oldStructure-&gt;typeInfo().inlineTypeFlags() == newStructure-&gt;typeInfo().inlineTypeFlags());
-        ASSERT(oldStructure-&gt;typeInfo().type() == newStructure-&gt;typeInfo().type());
-
-        LValue cell = lowCell(m_node-&gt;child1()); 
-        m_out.store32(
-            weakStructureID(newStructure),
-            cell, m_heaps.JSCell_structureID);
-    }
-    
-    void compileGetById()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case CellUse: {
-            setJSValue(getById(lowCell(m_node-&gt;child1())));
-            return;
-        }
-            
-        case UntypedUse: {
-            // This is pretty weird, since we duplicate the slow path both here and in the
-            // code generated by the IC. We should investigate making this less bad.
-            // https://bugs.webkit.org/show_bug.cgi?id=127830
-            LValue value = lowJSValue(m_node-&gt;child1());
-            
-            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped cell case&quot;));
-            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped not cell case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetById untyped continuation&quot;));
-            
-            m_out.branch(
-                isCell(value, provenType(m_node-&gt;child1())), unsure(cellCase), unsure(notCellCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
-            ValueFromBlock cellResult = m_out.anchor(getById(value));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notCellCase, continuation);
-            ValueFromBlock notCellResult = m_out.anchor(vmCall(
-                m_out.int64, m_out.operation(operationGetByIdGeneric),
-                m_callFrame, value,
-                m_out.constIntPtr(m_graph.identifiers()[m_node-&gt;identifierNumber()])));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, cellResult, notCellResult));
-            return;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            return;
-        }
-    }
-    
-    void compilePutById()
-    {
-        Node* node = m_node;
-        
-        // See above; CellUse is easier so we do only that for now.
-        ASSERT(node-&gt;child1().useKind() == CellUse);
-
-        LValue base = lowCell(node-&gt;child1());
-        LValue value = lowJSValue(node-&gt;child2());
-        auto uid = m_graph.identifiers()[node-&gt;identifierNumber()];
-
-        B3::PatchpointValue* patchpoint = m_out.patchpoint(Void);
-        patchpoint-&gt;appendSomeRegister(base);
-        patchpoint-&gt;appendSomeRegister(value);
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-
-        // FIXME: If this is a PutByIdFlush, we might want to late-clobber volatile registers.
-        // https://bugs.webkit.org/show_bug.cgi?id=152848
-
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-
-        State* state = &amp;m_ftlState;
-        ECMAMode ecmaMode = m_graph.executableFor(node-&gt;origin.semantic)-&gt;ecmaMode();
-        
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                CallSiteIndex callSiteIndex =
-                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(node-&gt;origin.semantic);
-
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-
-                // JS setter call ICs generated by the PutById IC will need this.
-                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
-
-                auto generator = Box&lt;JITPutByIdGenerator&gt;::create(
-                    jit.codeBlock(), node-&gt;origin.semantic, callSiteIndex,
-                    params.unavailableRegisters(), JSValueRegs(params[0].gpr()),
-                    JSValueRegs(params[1].gpr()), GPRInfo::patchpointScratchRegister, ecmaMode,
-                    node-&gt;op() == PutByIdDirect ? Direct : NotDirect);
-
-                generator-&gt;generateFastPath(jit);
-                CCallHelpers::Label done = jit.label();
-
-                params.addLatePath(
-                    [=] (CCallHelpers&amp; jit) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                        generator-&gt;slowPathJump().link(&amp;jit);
-                        CCallHelpers::Label slowPathBegin = jit.label();
-                        CCallHelpers::Call slowPathCall = callOperation(
-                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                            exceptions.get(), generator-&gt;slowPathFunction(), InvalidGPRReg,
-                            CCallHelpers::TrustedImmPtr(generator-&gt;stubInfo()), params[1].gpr(),
-                            params[0].gpr(), CCallHelpers::TrustedImmPtr(uid)).call();
-                        jit.jump().linkTo(done, &amp;jit);
-
-                        generator-&gt;reportSlowPathCall(slowPathBegin, slowPathCall);
-
-                        jit.addLinkTask(
-                            [=] (LinkBuffer&amp; linkBuffer) {
-                                generator-&gt;finalize(linkBuffer);
-                            });
-                    });
-            });
-    }
-    
-    void compileGetButterfly()
-    {
-        setStorage(loadButterflyWithBarrier(lowCell(m_node-&gt;child1())));
-    }
-
-    void compileGetButterflyReadOnly()
-    {
-        setStorage(loadButterflyReadOnly(lowCell(m_node-&gt;child1())));
-    }
-    
-    void compileConstantStoragePointer()
-    {
-        setStorage(m_out.constIntPtr(m_node-&gt;storagePointer()));
-    }
-    
-    void compileGetIndexedPropertyStorage()
-    {
-        LValue cell = lowCell(m_node-&gt;child1());
-        
-        if (m_node-&gt;arrayMode().type() == Array::String) {
-            LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;GetIndexedPropertyStorage String slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetIndexedPropertyStorage String continuation&quot;));
-
-            LValue fastResultValue = m_out.loadPtr(cell, m_heaps.JSString_value);
-            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-            
-            m_out.branch(
-                m_out.notNull(fastResultValue), usually(continuation), rarely(slowPath));
-            
-            LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
-            
-            ValueFromBlock slowResult = m_out.anchor(
-                vmCall(m_out.intPtr, m_out.operation(operationResolveRope), m_callFrame, cell));
-            
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            
-            setStorage(m_out.loadPtr(m_out.phi(m_out.intPtr, fastResult, slowResult), m_heaps.StringImpl_data));
-            return;
-        }
-        
-        setStorage(loadVectorWithBarrier(cell));
-    }
-    
-    void compileCheckArray()
-    {
-        Edge edge = m_node-&gt;child1();
-        LValue cell = lowCell(edge);
-        
-        if (m_node-&gt;arrayMode().alreadyChecked(m_graph, m_node, abstractValue(edge)))
-            return;
-        
-        speculate(
-            BadIndexingType, jsValueValue(cell), 0,
-            m_out.logicalNot(isArrayType(cell, m_node-&gt;arrayMode())));
-    }
-
-    void compileGetTypedArrayByteOffset()
-    {
-        LValue basePtr = lowCell(m_node-&gt;child1());    
-
-        LBasicBlock simpleCase = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset wasteless typed array&quot;));
-        LBasicBlock wastefulCase = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset wasteful typed array&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetTypedArrayByteOffset continuation&quot;));
-        
-        LValue mode = m_out.load32(basePtr, m_heaps.JSArrayBufferView_mode);
-        m_out.branch(
-            m_out.notEqual(mode, m_out.constInt32(WastefulTypedArray)),
-            unsure(simpleCase), unsure(wastefulCase));
-
-        LBasicBlock lastNext = m_out.appendTo(simpleCase, wastefulCase);
-
-        ValueFromBlock simpleOut = m_out.anchor(m_out.constIntPtr(0));
-
-        m_out.jump(continuation);
-
-        m_out.appendTo(wastefulCase, continuation);
-
-        LValue vectorPtr = loadVectorReadOnly(basePtr);
-        LValue butterflyPtr = loadButterflyReadOnly(basePtr);
-        LValue arrayBufferPtr = m_out.loadPtr(butterflyPtr, m_heaps.Butterfly_arrayBuffer);
-        LValue dataPtr = m_out.loadPtr(arrayBufferPtr, m_heaps.ArrayBuffer_data);
-
-        ValueFromBlock wastefulOut = m_out.anchor(m_out.sub(vectorPtr, dataPtr));
-
-        m_out.jump(continuation);
-        m_out.appendTo(continuation, lastNext);
-
-        setInt32(m_out.castToInt32(m_out.phi(m_out.intPtr, simpleOut, wastefulOut)));
-    }
-    
-    void compileGetArrayLength()
-    {
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Double:
-        case Array::Contiguous: {
-            setInt32(m_out.load32NonNegative(lowStorage(m_node-&gt;child2()), m_heaps.Butterfly_publicLength));
-            return;
-        }
-            
-        case Array::String: {
-            LValue string = lowCell(m_node-&gt;child1());
-            setInt32(m_out.load32NonNegative(string, m_heaps.JSString_length));
-            return;
-        }
-            
-        case Array::DirectArguments: {
-            LValue arguments = lowCell(m_node-&gt;child1());
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.notNull(m_out.loadPtr(arguments, m_heaps.DirectArguments_overrides)));
-            setInt32(m_out.load32NonNegative(arguments, m_heaps.DirectArguments_length));
-            return;
-        }
-            
-        case Array::ScopedArguments: {
-            LValue arguments = lowCell(m_node-&gt;child1());
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.notZero32(m_out.load8ZeroExt32(arguments, m_heaps.ScopedArguments_overrodeThings)));
-            setInt32(m_out.load32NonNegative(arguments, m_heaps.ScopedArguments_totalLength));
-            return;
-        }
-            
-        default:
-            if (m_node-&gt;arrayMode().isSomeTypedArrayView()) {
-                setInt32(
-                    m_out.load32NonNegative(lowCell(m_node-&gt;child1()), m_heaps.JSArrayBufferView_length));
-                return;
-            }
-            
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            return;
-        }
-    }
-    
-    void compileCheckInBounds()
-    {
-        speculate(
-            OutOfBounds, noValue(), 0,
-            m_out.aboveOrEqual(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-    }
-    
-    void compileGetByVal()
-    {
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Contiguous: {
-            LValue index = lowInt32(m_node-&gt;child2());
-            LValue storage = lowStorage(m_node-&gt;child3());
-            
-            IndexedAbstractHeap&amp; heap = m_node-&gt;arrayMode().type() == Array::Int32 ?
-                m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties;
-            
-            if (m_node-&gt;arrayMode().isInBounds()) {
-                LValue result = m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2()));
-                LValue isHole = m_out.isZero64(result);
-                if (m_node-&gt;arrayMode().isSaneChain()) {
-                    DFG_ASSERT(
-                        m_graph, m_node, m_node-&gt;arrayMode().type() == Array::Contiguous);
-                    result = m_out.select(
-                        isHole, m_out.constInt64(JSValue::encode(jsUndefined())), result);
-                } else
-                    speculate(LoadFromHole, noValue(), 0, isHole);
-                setJSValue(result);
-                return;
-            }
-            
-            LValue base = lowCell(m_node-&gt;child1());
-            
-            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous fast case&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal int/contiguous continuation&quot;));
-            
-            m_out.branch(
-                m_out.aboveOrEqual(
-                    index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
-                rarely(slowCase), usually(fastCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(fastCase, slowCase);
-
-            LValue fastResultValue = m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2()));
-            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-            m_out.branch(
-                m_out.isZero64(fastResultValue), rarely(slowCase), usually(continuation));
-            
-            m_out.appendTo(slowCase, continuation);
-            ValueFromBlock slowResult = m_out.anchor(
-                vmCall(m_out.int64, m_out.operation(operationGetByValArrayInt), m_callFrame, base, index));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-            return;
-        }
-            
-        case Array::Double: {
-            LValue index = lowInt32(m_node-&gt;child2());
-            LValue storage = lowStorage(m_node-&gt;child3());
-            
-            IndexedAbstractHeap&amp; heap = m_heaps.indexedDoubleProperties;
-            
-            if (m_node-&gt;arrayMode().isInBounds()) {
-                LValue result = m_out.loadDouble(
-                    baseIndex(heap, storage, index, m_node-&gt;child2()));
-                
-                if (!m_node-&gt;arrayMode().isSaneChain()) {
-                    speculate(
-                        LoadFromHole, noValue(), 0,
-                        m_out.doubleNotEqualOrUnordered(result, result));
-                }
-                setDouble(result);
-                break;
-            }
-            
-            LValue base = lowCell(m_node-&gt;child1());
-            
-            LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double in bounds&quot;));
-            LBasicBlock boxPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double boxing&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal double continuation&quot;));
-            
-            m_out.branch(
-                m_out.aboveOrEqual(
-                    index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
-                rarely(slowCase), usually(inBounds));
-            
-            LBasicBlock lastNext = m_out.appendTo(inBounds, boxPath);
-            LValue doubleValue = m_out.loadDouble(
-                baseIndex(heap, storage, index, m_node-&gt;child2()));
-            m_out.branch(
-                m_out.doubleNotEqualOrUnordered(doubleValue, doubleValue),
-                rarely(slowCase), usually(boxPath));
-            
-            m_out.appendTo(boxPath, slowCase);
-            ValueFromBlock fastResult = m_out.anchor(boxDouble(doubleValue));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(slowCase, continuation);
-            ValueFromBlock slowResult = m_out.anchor(
-                vmCall(m_out.int64, m_out.operation(operationGetByValArrayInt), m_callFrame, base, index));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-            return;
-        }
-
-        case Array::Undecided: {
-            LValue index = lowInt32(m_node-&gt;child2());
-
-            speculate(OutOfBounds, noValue(), m_node, m_out.lessThan(index, m_out.int32Zero));
-            setJSValue(m_out.constInt64(ValueUndefined));
-            return;
-        }
-            
-        case Array::DirectArguments: {
-            LValue base = lowCell(m_node-&gt;child1());
-            LValue index = lowInt32(m_node-&gt;child2());
-            
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.notNull(m_out.loadPtr(base, m_heaps.DirectArguments_overrides)));
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.aboveOrEqual(
-                    index,
-                    m_out.load32NonNegative(base, m_heaps.DirectArguments_length)));
-
-            TypedPointer address = m_out.baseIndex(
-                m_heaps.DirectArguments_storage, base, m_out.zeroExtPtr(index));
-            setJSValue(m_out.load64(address));
-            return;
-        }
-            
-        case Array::ScopedArguments: {
-            LValue base = lowCell(m_node-&gt;child1());
-            LValue index = lowInt32(m_node-&gt;child2());
-            
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.aboveOrEqual(
-                    index,
-                    m_out.load32NonNegative(base, m_heaps.ScopedArguments_totalLength)));
-            
-            LValue table = m_out.loadPtr(base, m_heaps.ScopedArguments_table);
-            LValue namedLength = m_out.load32(table, m_heaps.ScopedArgumentsTable_length);
-            
-            LBasicBlock namedCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments named case&quot;));
-            LBasicBlock overflowCase = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments overflow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal ScopedArguments continuation&quot;));
-            
-            m_out.branch(
-                m_out.aboveOrEqual(index, namedLength), unsure(overflowCase), unsure(namedCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(namedCase, overflowCase);
-            
-            LValue scope = m_out.loadPtr(base, m_heaps.ScopedArguments_scope);
-            LValue arguments = m_out.loadPtr(table, m_heaps.ScopedArgumentsTable_arguments);
-            
-            TypedPointer address = m_out.baseIndex(
-                m_heaps.scopedArgumentsTableArguments, arguments, m_out.zeroExtPtr(index));
-            LValue scopeOffset = m_out.load32(address);
-            
-            speculate(
-                ExoticObjectMode, noValue(), nullptr,
-                m_out.equal(scopeOffset, m_out.constInt32(ScopeOffset::invalidOffset)));
-            
-            address = m_out.baseIndex(
-                m_heaps.JSEnvironmentRecord_variables, scope, m_out.zeroExtPtr(scopeOffset));
-            ValueFromBlock namedResult = m_out.anchor(m_out.load64(address));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(overflowCase, continuation);
-            
-            address = m_out.baseIndex(
-                m_heaps.ScopedArguments_overflowStorage, base,
-                m_out.zeroExtPtr(m_out.sub(index, namedLength)));
-            LValue overflowValue = m_out.load64(address);
-            speculate(ExoticObjectMode, noValue(), nullptr, m_out.isZero64(overflowValue));
-            ValueFromBlock overflowResult = m_out.anchor(overflowValue);
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, namedResult, overflowResult));
-            return;
-        }
-            
-        case Array::Generic: {
-            setJSValue(vmCall(
-                m_out.int64, m_out.operation(operationGetByVal), m_callFrame,
-                lowJSValue(m_node-&gt;child1()), lowJSValue(m_node-&gt;child2())));
-            return;
-        }
-            
-        case Array::String: {
-            compileStringCharAt();
-            return;
-        }
-            
-        default: {
-            LValue index = lowInt32(m_node-&gt;child2());
-            LValue storage = lowStorage(m_node-&gt;child3());
-            
-            TypedArrayType type = m_node-&gt;arrayMode().typedArrayType();
-            
-            if (isTypedView(type)) {
-                TypedPointer pointer = TypedPointer(
-                    m_heaps.typedArrayProperties,
-                    m_out.add(
-                        storage,
-                        m_out.shl(
-                            m_out.zeroExtPtr(index),
-                            m_out.constIntPtr(logElementSize(type)))));
-                
-                if (isInt(type)) {
-                    LValue result;
-                    switch (elementSize(type)) {
-                    case 1:
-                        result = isSigned(type) ? m_out.load8SignExt32(pointer) : m_out.load8ZeroExt32(pointer);
-                        break;
-                    case 2:
-                        result = isSigned(type) ? m_out.load16SignExt32(pointer) : m_out.load16ZeroExt32(pointer);
-                        break;
-                    case 4:
-                        result = m_out.load32(pointer);
-                        break;
-                    default:
-                        DFG_CRASH(m_graph, m_node, &quot;Bad element size&quot;);
-                    }
-                    
-                    if (elementSize(type) &lt; 4 || isSigned(type)) {
-                        setInt32(result);
-                        return;
-                    }
-
-                    if (m_node-&gt;shouldSpeculateInt32()) {
-                        speculate(
-                            Overflow, noValue(), 0, m_out.lessThan(result, m_out.int32Zero));
-                        setInt32(result);
-                        return;
-                    }
-                    
-                    if (m_node-&gt;shouldSpeculateMachineInt()) {
-                        setStrictInt52(m_out.zeroExt(result, m_out.int64));
-                        return;
-                    }
-                    
-                    setDouble(m_out.unsignedToDouble(result));
-                    return;
-                }
-            
-                ASSERT(isFloat(type));
-                
-                LValue result;
-                switch (type) {
-                case TypeFloat32:
-                    result = m_out.floatToDouble(m_out.loadFloat(pointer));
-                    break;
-                case TypeFloat64:
-                    result = m_out.loadDouble(pointer);
-                    break;
-                default:
-                    DFG_CRASH(m_graph, m_node, &quot;Bad typed array type&quot;);
-                }
-                
-                setDouble(result);
-                return;
-            }
-            
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            return;
-        } }
-    }
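The typed-array branch of compileGetByVal widens integer loads according to a simple rule: elements narrower than 4 bytes, or signed 4-byte elements, always fit an int32, while an unsigned 32-bit load stays int32 only if its sign bit is clear (the `Overflow` speculation) and otherwise widens to int52 or double. A small sketch of that decision, with assumed helper names:

```cpp
#include <cstdint>

// Mirrors "elementSize(type) < 4 || isSigned(type)": these results are
// representable as int32 with no further checks.
static bool fitsInt32Directly(unsigned elementSizeBytes, bool elementIsSigned)
{
    return elementSizeBytes < 4 || elementIsSigned;
}

// Mirrors unsignedToDouble: the 32-bit pattern is interpreted as an
// unsigned value before conversion, so values with the sign bit set
// stay positive.
static double uint32ResultAsDouble(uint32_t loaded)
{
    return static_cast<double>(loaded);
}
```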
-    
-    void compileGetMyArgumentByVal()
-    {
-        InlineCallFrame* inlineCallFrame = m_node-&gt;child1()-&gt;origin.semantic.inlineCallFrame;
-        
-        LValue index = lowInt32(m_node-&gt;child2());
-        
-        LValue limit;
-        if (inlineCallFrame &amp;&amp; !inlineCallFrame-&gt;isVarargs())
-            limit = m_out.constInt32(inlineCallFrame-&gt;arguments.size() - 1);
-        else {
-            VirtualRegister argumentCountRegister;
-            if (!inlineCallFrame)
-                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
-            else
-                argumentCountRegister = inlineCallFrame-&gt;argumentCountRegister;
-            limit = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
-        }
-        
-        speculate(ExoticObjectMode, noValue(), 0, m_out.aboveOrEqual(index, limit));
-        
-        TypedPointer base;
-        if (inlineCallFrame) {
-            if (inlineCallFrame-&gt;arguments.size() &lt;= 1) {
-                // We should have already exited due to the bounds check, above. Just tell the
-                // compiler that anything dominated by this instruction is not reachable, so
-                // that we don't waste time generating such code. This will also plant some
-                // kind of crashing instruction so that if by some fluke the bounds check didn't
-                // work, we'll crash in an easy-to-see way.
-                didAlreadyTerminate();
-                return;
-            }
-            base = addressFor(inlineCallFrame-&gt;arguments[1].virtualRegister());
-        } else
-            base = addressFor(virtualRegisterForArgument(1));
-        
-        LValue pointer = m_out.baseIndex(
-            base.value(), m_out.zeroExt(index, m_out.intPtr), ScaleEight);
-        setJSValue(m_out.load64(TypedPointer(m_heaps.variables.atAnyIndex(), pointer)));
-    }
-    
-    void compilePutByVal()
-    {
-        Edge child1 = m_graph.varArgChild(m_node, 0);
-        Edge child2 = m_graph.varArgChild(m_node, 1);
-        Edge child3 = m_graph.varArgChild(m_node, 2);
-        Edge child4 = m_graph.varArgChild(m_node, 3);
-        Edge child5 = m_graph.varArgChild(m_node, 4);
-        
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Generic: {
-            V_JITOperation_EJJJ operation;
-            if (m_node-&gt;op() == PutByValDirect) {
-                if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
-                    operation = operationPutByValDirectStrict;
-                else
-                    operation = operationPutByValDirectNonStrict;
-            } else {
-                if (m_graph.isStrictModeFor(m_node-&gt;origin.semantic))
-                    operation = operationPutByValStrict;
-                else
-                    operation = operationPutByValNonStrict;
-            }
-                
-            vmCall(
-                m_out.voidType, m_out.operation(operation), m_callFrame,
-                lowJSValue(child1), lowJSValue(child2), lowJSValue(child3));
-            return;
-        }
-            
-        default:
-            break;
-        }
-
-        LValue base = lowCell(child1);
-        LValue index = lowInt32(child2);
-        LValue storage = lowStorage(child4);
-        
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Double:
-        case Array::Contiguous: {
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal continuation&quot;));
-            LBasicBlock outerLastNext = m_out.appendTo(m_out.m_block, continuation);
-            
-            switch (m_node-&gt;arrayMode().type()) {
-            case Array::Int32:
-            case Array::Contiguous: {
-                LValue value = lowJSValue(child3, ManualOperandSpeculation);
-                
-                if (m_node-&gt;arrayMode().type() == Array::Int32)
-                    FTL_TYPE_CHECK(jsValueValue(value), child3, SpecInt32, isNotInt32(value));
-                
-                TypedPointer elementPointer = m_out.baseIndex(
-                    m_node-&gt;arrayMode().type() == Array::Int32 ?
-                    m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties,
-                    storage, m_out.zeroExtPtr(index), provenValue(child2));
-                
-                if (m_node-&gt;op() == PutByValAlias) {
-                    m_out.store64(value, elementPointer);
-                    break;
-                }
-                
-                contiguousPutByValOutOfBounds(
-                    codeBlock()-&gt;isStrictMode()
-                    ? operationPutByValBeyondArrayBoundsStrict
-                    : operationPutByValBeyondArrayBoundsNonStrict,
-                    base, storage, index, value, continuation);
-                
-                m_out.store64(value, elementPointer);
-                break;
-            }
-                
-            case Array::Double: {
-                LValue value = lowDouble(child3);
-                
-                FTL_TYPE_CHECK(
-                    doubleValue(value), child3, SpecDoubleReal,
-                    m_out.doubleNotEqualOrUnordered(value, value));
-                
-                TypedPointer elementPointer = m_out.baseIndex(
-                    m_heaps.indexedDoubleProperties, storage, m_out.zeroExtPtr(index),
-                    provenValue(child2));
-                
-                if (m_node-&gt;op() == PutByValAlias) {
-                    m_out.storeDouble(value, elementPointer);
-                    break;
-                }
-                
-                contiguousPutByValOutOfBounds(
-                    codeBlock()-&gt;isStrictMode()
-                    ? operationPutDoubleByValBeyondArrayBoundsStrict
-                    : operationPutDoubleByValBeyondArrayBoundsNonStrict,
-                    base, storage, index, value, continuation);
-                
-                m_out.storeDouble(value, elementPointer);
-                break;
-            }
-                
-            default:
-                DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            }
-
-            m_out.jump(continuation);
-            m_out.appendTo(continuation, outerLastNext);
-            return;
-        }
-            
-        default:
-            TypedArrayType type = m_node-&gt;arrayMode().typedArrayType();
-            
-            if (isTypedView(type)) {
-                TypedPointer pointer = TypedPointer(
-                    m_heaps.typedArrayProperties,
-                    m_out.add(
-                        storage,
-                        m_out.shl(
-                            m_out.zeroExt(index, m_out.intPtr),
-                            m_out.constIntPtr(logElementSize(type)))));
-                
-                Output::StoreType storeType;
-                LValue valueToStore;
-                
-                if (isInt(type)) {
-                    LValue intValue;
-                    switch (child3.useKind()) {
-                    case Int52RepUse:
-                    case Int32Use: {
-                        if (child3.useKind() == Int32Use)
-                            intValue = lowInt32(child3);
-                        else
-                            intValue = m_out.castToInt32(lowStrictInt52(child3));
-
-                        if (isClamped(type)) {
-                            ASSERT(elementSize(type) == 1);
-                            
-                            LBasicBlock atLeastZero = FTL_NEW_BLOCK(m_out, (&quot;PutByVal int clamp atLeastZero&quot;));
-                            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal int clamp continuation&quot;));
-                            
-                            Vector&lt;ValueFromBlock, 2&gt; intValues;
-                            intValues.append(m_out.anchor(m_out.int32Zero));
-                            m_out.branch(
-                                m_out.lessThan(intValue, m_out.int32Zero),
-                                unsure(continuation), unsure(atLeastZero));
-                            
-                            LBasicBlock lastNext = m_out.appendTo(atLeastZero, continuation);
-                            
-                            intValues.append(m_out.anchor(m_out.select(
-                                m_out.greaterThan(intValue, m_out.constInt32(255)),
-                                m_out.constInt32(255),
-                                intValue)));
-                            m_out.jump(continuation);
-                            
-                            m_out.appendTo(continuation, lastNext);
-                            intValue = m_out.phi(m_out.int32, intValues);
-                        }
-                        break;
-                    }
-                        
-                    case DoubleRepUse: {
-                        LValue doubleValue = lowDouble(child3);
-                        
-                        if (isClamped(type)) {
-                            ASSERT(elementSize(type) == 1);
-                            
-                            LBasicBlock atLeastZero = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp atLeastZero&quot;));
-                            LBasicBlock withinRange = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp withinRange&quot;));
-                            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal double clamp continuation&quot;));
-                            
-                            Vector&lt;ValueFromBlock, 3&gt; intValues;
-                            intValues.append(m_out.anchor(m_out.int32Zero));
-                            m_out.branch(
-                                m_out.doubleLessThanOrUnordered(doubleValue, m_out.doubleZero),
-                                unsure(continuation), unsure(atLeastZero));
-                            
-                            LBasicBlock lastNext = m_out.appendTo(atLeastZero, withinRange);
-                            intValues.append(m_out.anchor(m_out.constInt32(255)));
-                            m_out.branch(
-                                m_out.doubleGreaterThan(doubleValue, m_out.constDouble(255)),
-                                unsure(continuation), unsure(withinRange));
-                            
-                            m_out.appendTo(withinRange, continuation);
-                            intValues.append(m_out.anchor(m_out.doubleToInt(doubleValue)));
-                            m_out.jump(continuation);
-                            
-                            m_out.appendTo(continuation, lastNext);
-                            intValue = m_out.phi(m_out.int32, intValues);
-                        } else
-                            intValue = doubleToInt32(doubleValue);
-                        break;
-                    }
-                        
-                    default:
-                        DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                    }
-
-                    valueToStore = intValue;
-                    switch (elementSize(type)) {
-                    case 1:
-                        storeType = Output::Store32As8;
-                        break;
-                    case 2:
-                        storeType = Output::Store32As16;
-                        break;
-                    case 4:
-                        storeType = Output::Store32;
-                        break;
-                    default:
-                        DFG_CRASH(m_graph, m_node, &quot;Bad element size&quot;);
-                    }
-                } else /* !isInt(type) */ {
-                    LValue value = lowDouble(child3);
-                    switch (type) {
-                    case TypeFloat32:
-                        valueToStore = m_out.doubleToFloat(value);
-                        storeType = Output::StoreFloat;
-                        break;
-                    case TypeFloat64:
-                        valueToStore = value;
-                        storeType = Output::StoreDouble;
-                        break;
-                    default:
-                        DFG_CRASH(m_graph, m_node, &quot;Bad typed array type&quot;);
-                    }
-                }
-                
-                if (m_node-&gt;arrayMode().isInBounds() || m_node-&gt;op() == PutByValAlias)
-                    m_out.store(valueToStore, pointer, storeType);
-                else {
-                    LBasicBlock isInBounds = FTL_NEW_BLOCK(m_out, (&quot;PutByVal typed array in bounds case&quot;));
-                    LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;PutByVal typed array continuation&quot;));
-                    
-                    m_out.branch(
-                        m_out.aboveOrEqual(index, lowInt32(child5)),
-                        unsure(continuation), unsure(isInBounds));
-                    
-                    LBasicBlock lastNext = m_out.appendTo(isInBounds, continuation);
-                    m_out.store(valueToStore, pointer, storeType);
-                    m_out.jump(continuation);
-                    
-                    m_out.appendTo(continuation, lastNext);
-                }
-                
-                return;
-            }
-            
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            break;
-        }
-    }
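The two `isClamped` branches in compilePutByVal implement Uint8ClampedArray store clamping: for int32 inputs, values below 0 become 0 and values above 255 become 255; for double inputs, the "less-than-or-unordered" comparison additionally sends NaN to 0. A scalar sketch of the semantics those emitted branches encode (helper names are mine, and the in-range double conversion here simply truncates, matching only what `doubleToInt` visibly does in this excerpt):

```cpp
#include <cmath>
#include <cstdint>

static int32_t clampInt(int32_t v)
{
    if (v < 0)
        return 0;                       // lessThan(intValue, 0) -> 0
    return v > 255 ? 255 : v;           // select(greaterThan(v, 255), 255, v)
}

static int32_t clampDouble(double d)
{
    if (!(d >= 0))                      // doubleLessThanOrUnordered: d < 0 or NaN -> 0
        return 0;
    if (d > 255)                        // doubleGreaterThan(d, 255) -> 255
        return 255;
    return static_cast<int32_t>(d);     // doubleToInt on the within-range path
}
```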
-
-    void compilePutAccessorById()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue accessor = lowCell(m_node-&gt;child2());
-        auto uid = m_graph.identifiers()[m_node-&gt;identifierNumber()];
-        vmCall(
-            m_out.voidType,
-            m_out.operation(m_node-&gt;op() == PutGetterById ? operationPutGetterById : operationPutSetterById),
-            m_callFrame, base, m_out.constIntPtr(uid), m_out.constInt32(m_node-&gt;accessorAttributes()), accessor);
-    }
-
-    void compilePutGetterSetterById()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue getter = lowJSValue(m_node-&gt;child2());
-        LValue setter = lowJSValue(m_node-&gt;child3());
-        auto uid = m_graph.identifiers()[m_node-&gt;identifierNumber()];
-        vmCall(
-            m_out.voidType, m_out.operation(operationPutGetterSetter),
-            m_callFrame, base, m_out.constIntPtr(uid), m_out.constInt32(m_node-&gt;accessorAttributes()), getter, setter);
-    }
-
-    void compilePutAccessorByVal()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue subscript = lowJSValue(m_node-&gt;child2());
-        LValue accessor = lowCell(m_node-&gt;child3());
-        vmCall(
-            m_out.voidType,
-            m_out.operation(m_node-&gt;op() == PutGetterByVal ? operationPutGetterByVal : operationPutSetterByVal),
-            m_callFrame, base, subscript, m_out.constInt32(m_node-&gt;accessorAttributes()), accessor);
-    }
-    
-    void compileArrayPush()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue storage = lowStorage(m_node-&gt;child3());
-        
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Contiguous:
-        case Array::Double: {
-            LValue value;
-            Output::StoreType storeType;
-            
-            if (m_node-&gt;arrayMode().type() != Array::Double) {
-                value = lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation);
-                if (m_node-&gt;arrayMode().type() == Array::Int32) {
-                    FTL_TYPE_CHECK(
-                        jsValueValue(value), m_node-&gt;child2(), SpecInt32, isNotInt32(value));
-                }
-                storeType = Output::Store64;
-            } else {
-                value = lowDouble(m_node-&gt;child2());
-                FTL_TYPE_CHECK(
-                    doubleValue(value), m_node-&gt;child2(), SpecDoubleReal,
-                    m_out.doubleNotEqualOrUnordered(value, value));
-                storeType = Output::StoreDouble;
-            }
-            
-            IndexedAbstractHeap&amp; heap = m_heaps.forArrayType(m_node-&gt;arrayMode().type());
-
-            LValue prevLength = m_out.load32(storage, m_heaps.Butterfly_publicLength);
-            
-            LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush fast path&quot;));
-            LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush slow path&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayPush continuation&quot;));
-            
-            m_out.branch(
-                m_out.aboveOrEqual(
-                    prevLength, m_out.load32(storage, m_heaps.Butterfly_vectorLength)),
-                unsure(slowPath), unsure(fastPath));
-            
-            LBasicBlock lastNext = m_out.appendTo(fastPath, slowPath);
-            m_out.store(
-                value, m_out.baseIndex(heap, storage, m_out.zeroExtPtr(prevLength)), storeType);
-            LValue newLength = m_out.add(prevLength, m_out.int32One);
-            m_out.store32(newLength, storage, m_heaps.Butterfly_publicLength);
-            
-            ValueFromBlock fastResult = m_out.anchor(boxInt32(newLength));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(slowPath, continuation);
-            LValue operation;
-            if (m_node-&gt;arrayMode().type() != Array::Double)
-                operation = m_out.operation(operationArrayPush);
-            else
-                operation = m_out.operation(operationArrayPushDouble);
-            ValueFromBlock slowResult = m_out.anchor(
-                vmCall(m_out.int64, operation, m_callFrame, value, base));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-            return;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            return;
-        }
-    }
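compileArrayPush branches on whether `publicLength` has reached `vectorLength`: if so, the slow call reallocates; otherwise it stores at the old length and bumps it, producing the new length as the result. A rough model of that split, with an assumed `Butterfly` struct standing in for the real storage header (field names borrowed from the heap references above, layout invented for the sketch):

```cpp
#include <cstdint>
#include <vector>

// Assumed stand-in for the butterfly storage header used above.
struct Butterfly {
    uint32_t publicLength;
    uint32_t vectorLength;
    std::vector<int64_t> slots;
};

static int32_t arrayPushModel(Butterfly& b, int64_t value)
{
    if (b.publicLength >= b.vectorLength) {
        // Slow path (operationArrayPush analogue): grow, then store.
        b.slots.push_back(value);
        b.vectorLength = static_cast<uint32_t>(b.slots.size());
        b.publicLength = b.vectorLength;
        return static_cast<int32_t>(b.publicLength);
    }
    // Fast path: store at prevLength, then publicLength = prevLength + 1.
    b.slots[b.publicLength] = value;
    return static_cast<int32_t>(++b.publicLength);
}
```

Note the fast path never touches `vectorLength`; the whole point of the split is that capacity growth, with its barriers and allocation, stays out of the common case.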
-    
-    void compileArrayPop()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue storage = lowStorage(m_node-&gt;child2());
-        
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Double:
-        case Array::Contiguous: {
-            IndexedAbstractHeap&amp; heap = m_heaps.forArrayType(m_node-&gt;arrayMode().type());
-            
-            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop fast case&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ArrayPop continuation&quot;));
-            
-            LValue prevLength = m_out.load32(storage, m_heaps.Butterfly_publicLength);
-            
-            Vector&lt;ValueFromBlock, 3&gt; results;
-            results.append(m_out.anchor(m_out.constInt64(JSValue::encode(jsUndefined()))));
-            m_out.branch(
-                m_out.isZero32(prevLength), rarely(continuation), usually(fastCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(fastCase, slowCase);
-            LValue newLength = m_out.sub(prevLength, m_out.int32One);
-            m_out.store32(newLength, storage, m_heaps.Butterfly_publicLength);
-            TypedPointer pointer = m_out.baseIndex(heap, storage, m_out.zeroExtPtr(newLength));
-            if (m_node-&gt;arrayMode().type() != Array::Double) {
-                LValue result = m_out.load64(pointer);
-                m_out.store64(m_out.int64Zero, pointer);
-                results.append(m_out.anchor(result));
-                m_out.branch(
-                    m_out.notZero64(result), usually(continuation), rarely(slowCase));
-            } else {
-                LValue result = m_out.loadDouble(pointer);
-                m_out.store64(m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)), pointer);
-                results.append(m_out.anchor(boxDouble(result)));
-                m_out.branch(
-                    m_out.doubleEqual(result, result),
-                    usually(continuation), rarely(slowCase));
-            }
-            
-            m_out.appendTo(slowCase, continuation);
-            results.append(m_out.anchor(vmCall(
-                m_out.int64, m_out.operation(operationArrayPopAndRecoverLength), m_callFrame, base)));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, results));
-            return;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad array type&quot;);
-            return;
-        }
-    }
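compileArrayPop is the mirror image: an empty array answers undefined without touching storage; otherwise the length is decremented and the vacated slot is cleared (zeroed for JSValue arrays, PNaN for double arrays), with a hole in that slot deferring to the slow path. A sketch of the JSValue-array variant, where the encodings are placeholders rather than JSC's real ones:

```cpp
#include <cstdint>
#include <vector>

// Placeholder encoding for jsUndefined(); not JSC's actual bit pattern.
constexpr int64_t encodedUndefined = -1;

// Illustrative stand-in for operationArrayPopAndRecoverLength.
static int64_t slowPop(std::vector<int64_t>&) { return encodedUndefined; }

static int64_t arrayPopModel(std::vector<int64_t>& slots, uint32_t& publicLength)
{
    if (publicLength == 0)
        return encodedUndefined;        // isZero32(prevLength) -> rarely(continuation)

    uint32_t newLength = --publicLength;
    int64_t result = slots[newLength];
    slots[newLength] = 0;               // clear the vacated slot
    if (result == 0)
        return slowPop(slots);          // hole: recover via the slow path
    return result;
}
```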
-    
-    void compileCreateActivation()
-    {
-        LValue scope = lowCell(m_node-&gt;child1());
-        SymbolTable* table = m_node-&gt;castOperand&lt;SymbolTable*&gt;();
-        Structure* structure = m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;activationStructure();
-        JSValue initializationValue = m_node-&gt;initializationValueForActivation();
-        ASSERT(initializationValue.isUndefined() || initializationValue == jsTDZValue());
-        if (table-&gt;singletonScope()-&gt;isStillValid()) {
-            LValue callResult = vmCall(
-                m_out.int64,
-                m_out.operation(operationCreateActivationDirect), m_callFrame, weakPointer(structure),
-                scope, weakPointer(table), m_out.constInt64(JSValue::encode(initializationValue)));
-            setJSValue(callResult);
-            return;
-        }
-        
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CreateActivation slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CreateActivation continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        LValue fastObject = allocateObject&lt;JSLexicalEnvironment&gt;(
-            JSLexicalEnvironment::allocationSize(table), structure, m_out.intPtrZero, slowPath);
-        
-        // We don't need memory barriers since we just fast-created the activation, so the
-        // activation must be young.
-        m_out.storePtr(scope, fastObject, m_heaps.JSScope_next);
-        m_out.storePtr(weakPointer(table), fastObject, m_heaps.JSSymbolTableObject_symbolTable);
-        
-        for (unsigned i = 0; i &lt; table-&gt;scopeSize(); ++i) {
-            m_out.store64(
-                m_out.constInt64(JSValue::encode(initializationValue)),
-                fastObject, m_heaps.JSEnvironmentRecord_variables[i]);
-        }
-        
-        ValueFromBlock fastResult = m_out.anchor(fastObject);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        LValue callResult = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationCreateActivationDirect, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
-                    CCallHelpers::TrustedImmPtr(table),
-                    CCallHelpers::TrustedImm64(JSValue::encode(initializationValue)));
-            },
-            scope);
-        ValueFromBlock slowResult = m_out.anchor(callResult);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
-    }
-    
-    void compileNewFunction()
-    {
-        ASSERT(m_node-&gt;op() == NewFunction || m_node-&gt;op() == NewArrowFunction || m_node-&gt;op() == NewGeneratorFunction);
-        bool isGeneratorFunction = m_node-&gt;op() == NewGeneratorFunction;
-        
-        LValue scope = lowCell(m_node-&gt;child1());
-        
-        FunctionExecutable* executable = m_node-&gt;castOperand&lt;FunctionExecutable*&gt;();
-        if (executable-&gt;singletonFunction()-&gt;isStillValid()) {
-            LValue callResult =
-                isGeneratorFunction ? vmCall(m_out.int64, m_out.operation(operationNewGeneratorFunction), m_callFrame, scope, weakPointer(executable)) :
-                vmCall(m_out.int64, m_out.operation(operationNewFunction), m_callFrame, scope, weakPointer(executable));
-            setJSValue(callResult);
-            return;
-        }
-        
-        Structure* structure =
-            isGeneratorFunction ? m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;generatorFunctionStructure() :
-            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;functionStructure();
-        
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;NewFunction slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewFunction continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        LValue fastObject =
-            isGeneratorFunction ? allocateObject&lt;JSGeneratorFunction&gt;(structure, m_out.intPtrZero, slowPath) :
-            allocateObject&lt;JSFunction&gt;(structure, m_out.intPtrZero, slowPath);
-        
-        
-        // We don't need memory barriers since we just fast-created the function, so it
-        // must be young.
-        m_out.storePtr(scope, fastObject, m_heaps.JSFunction_scope);
-        m_out.storePtr(weakPointer(executable), fastObject, m_heaps.JSFunction_executable);
-        m_out.storePtr(m_out.intPtrZero, fastObject, m_heaps.JSFunction_rareData);
-        
-        ValueFromBlock fastResult = m_out.anchor(fastObject);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-
-        Vector&lt;LValue&gt; slowPathArguments;
-        slowPathArguments.append(scope);
-        LValue callResult = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                if (isGeneratorFunction) {
-                    return createLazyCallGenerator(
-                        operationNewGeneratorFunctionWithInvalidatedReallocationWatchpoint,
-                        locations[0].directGPR(), locations[1].directGPR(),
-                        CCallHelpers::TrustedImmPtr(executable));
-                }
-                return createLazyCallGenerator(
-                    operationNewFunctionWithInvalidatedReallocationWatchpoint,
-                    locations[0].directGPR(), locations[1].directGPR(),
-                    CCallHelpers::TrustedImmPtr(executable));
-            },
-            slowPathArguments);
-        ValueFromBlock slowResult = m_out.anchor(callResult);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
-    }
-    
-    void compileCreateDirectArguments()
-    {
-        // FIXME: A more effective way of dealing with the argument count and callee is to have
-        // them be explicit arguments to this node.
-        // https://bugs.webkit.org/show_bug.cgi?id=142207
-        
-        Structure* structure =
-            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;directArgumentsStructure();
-        
-        unsigned minCapacity = m_graph.baselineCodeBlockFor(m_node-&gt;origin.semantic)-&gt;numParameters() - 1;
-        
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        ArgumentsLength length = getArgumentsLength();
-        
-        LValue fastObject;
-        if (length.isKnown) {
-            fastObject = allocateObject&lt;DirectArguments&gt;(
-                DirectArguments::allocationSize(std::max(length.known, minCapacity)), structure,
-                m_out.intPtrZero, slowPath);
-        } else {
-            LValue size = m_out.add(
-                m_out.shl(length.value, m_out.constInt32(3)),
-                m_out.constInt32(DirectArguments::storageOffset()));
-            
-            size = m_out.select(
-                m_out.aboveOrEqual(length.value, m_out.constInt32(minCapacity)),
-                size, m_out.constInt32(DirectArguments::allocationSize(minCapacity)));
-            
-            fastObject = allocateVariableSizedObject&lt;DirectArguments&gt;(
-                size, structure, m_out.intPtrZero, slowPath);
-        }
-        
-        m_out.store32(length.value, fastObject, m_heaps.DirectArguments_length);
-        m_out.store32(m_out.constInt32(minCapacity), fastObject, m_heaps.DirectArguments_minCapacity);
-        m_out.storePtr(m_out.intPtrZero, fastObject, m_heaps.DirectArguments_overrides);
-        
-        ValueFromBlock fastResult = m_out.anchor(fastObject);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        LValue callResult = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationCreateDirectArguments, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
-                    CCallHelpers::TrustedImm32(minCapacity));
-            }, length.value);
-        ValueFromBlock slowResult = m_out.anchor(callResult);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        LValue result = m_out.phi(m_out.intPtr, fastResult, slowResult);
-
-        m_out.storePtr(getCurrentCallee(), result, m_heaps.DirectArguments_callee);
-        
-        if (length.isKnown) {
-            VirtualRegister start = AssemblyHelpers::argumentsStart(m_node-&gt;origin.semantic);
-            for (unsigned i = 0; i &lt; std::max(length.known, minCapacity); ++i) {
-                m_out.store64(
-                    m_out.load64(addressFor(start + i)),
-                    result, m_heaps.DirectArguments_storage[i]);
-            }
-        } else {
-            LValue stackBase = getArgumentsStart();
-            
-            LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments loop body&quot;));
-            LBasicBlock end = FTL_NEW_BLOCK(m_out, (&quot;CreateDirectArguments loop end&quot;));
-
-            ValueFromBlock originalLength;
-            if (minCapacity) {
-                LValue capacity = m_out.select(
-                    m_out.aboveOrEqual(length.value, m_out.constInt32(minCapacity)),
-                    length.value,
-                    m_out.constInt32(minCapacity));
-                LValue originalLengthValue = m_out.zeroExtPtr(capacity);
-                originalLength = m_out.anchor(originalLengthValue);
-                m_out.jump(loop);
-            } else {
-                LValue originalLengthValue = m_out.zeroExtPtr(length.value);
-                originalLength = m_out.anchor(originalLengthValue);
-                m_out.branch(m_out.isNull(originalLengthValue), unsure(end), unsure(loop));
-            }
-            
-            lastNext = m_out.appendTo(loop, end);
-            LValue previousIndex = m_out.phi(m_out.intPtr, originalLength);
-            LValue index = m_out.sub(previousIndex, m_out.intPtrOne);
-            m_out.store64(
-                m_out.load64(m_out.baseIndex(m_heaps.variables, stackBase, index)),
-                m_out.baseIndex(m_heaps.DirectArguments_storage, result, index));
-            ValueFromBlock nextIndex = m_out.anchor(index);
-            m_out.addIncomingToPhi(previousIndex, nextIndex);
-            m_out.branch(m_out.isNull(index), unsure(end), unsure(loop));
-            
-            m_out.appendTo(end, lastNext);
-        }
-        
-        setJSValue(result);
-    }
-    
-    void compileCreateScopedArguments()
-    {
-        LValue scope = lowCell(m_node-&gt;child1());
-        
-        LValue result = vmCall(
-            m_out.int64, m_out.operation(operationCreateScopedArguments), m_callFrame,
-            weakPointer(
-                m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;scopedArgumentsStructure()),
-            getArgumentsStart(), getArgumentsLength().value, getCurrentCallee(), scope);
-        
-        setJSValue(result);
-    }
-    
-    void compileCreateClonedArguments()
-    {
-        LValue result = vmCall(
-            m_out.int64, m_out.operation(operationCreateClonedArguments), m_callFrame,
-            weakPointer(
-                m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;outOfBandArgumentsStructure()),
-            getArgumentsStart(), getArgumentsLength().value, getCurrentCallee());
-        
-        setJSValue(result);
-    }
-
-    void compileCopyRest()
-    {            
-        LBasicBlock doCopyRest = FTL_NEW_BLOCK(m_out, (&quot;CopyRest C call&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;FillRestParameter continuation&quot;));
-
-        LValue arrayLength = lowInt32(m_node-&gt;child2());
-
-        m_out.branch(
-            m_out.equal(arrayLength, m_out.constInt32(0)),
-            unsure(continuation), unsure(doCopyRest));
-            
-        LBasicBlock lastNext = m_out.appendTo(doCopyRest, continuation);
-        // Arguments: 0:exec, 1:JSCell* array, 2:arguments start, 3:number of arguments to skip, 4:array length
-        LValue numberOfArgumentsToSkip = m_out.constInt32(m_node-&gt;numberOfArgumentsToSkip());
-        vmCall(
-            m_out.voidType,m_out.operation(operationCopyRest), m_callFrame, lowCell(m_node-&gt;child1()),
-            getArgumentsStart(), numberOfArgumentsToSkip, arrayLength);
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-    }
-
-    void compileGetRestLength()
-    {
-        LBasicBlock nonZeroLength = FTL_NEW_BLOCK(m_out, (&quot;GetRestLength non zero&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetRestLength continuation&quot;));
-        
-        ValueFromBlock zeroLengthResult = m_out.anchor(m_out.constInt32(0));
-
-        LValue numberOfArgumentsToSkip = m_out.constInt32(m_node-&gt;numberOfArgumentsToSkip());
-        LValue argumentsLength = getArgumentsLength().value;
-        m_out.branch(m_out.above(argumentsLength, numberOfArgumentsToSkip),
-            unsure(nonZeroLength), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(nonZeroLength, continuation);
-        ValueFromBlock nonZeroLengthResult = m_out.anchor(m_out.sub(argumentsLength, numberOfArgumentsToSkip));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setInt32(m_out.phi(m_out.int32, zeroLengthResult, nonZeroLengthResult));
-    }
-    
-    void compileNewObject()
-    {
-        setJSValue(allocateObject(m_node-&gt;structure()));
-    }
-    
-    void compileNewArray()
-    {
-        // First speculate appropriately on all of the children. Do this unconditionally up here
-        // because some of the slow paths may otherwise forget to do it. It's sort of arguable
-        // that doing the speculations up here might be unprofitable for RA - so we can consider
-        // sinking this to below the allocation fast path if we find that this has a lot of
-        // register pressure.
-        for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex)
-            speculate(m_graph.varArgChild(m_node, operandIndex));
-        
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
-            m_node-&gt;indexingType());
-
-        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
-            unsigned numElements = m_node-&gt;numChildren();
-            
-            ArrayValues arrayValues = allocateJSArray(structure, numElements);
-            
-            for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex) {
-                Edge edge = m_graph.varArgChild(m_node, operandIndex);
-                
-                switch (m_node-&gt;indexingType()) {
-                case ALL_BLANK_INDEXING_TYPES:
-                case ALL_UNDECIDED_INDEXING_TYPES:
-                    DFG_CRASH(m_graph, m_node, &quot;Bad indexing type&quot;);
-                    break;
-                    
-                case ALL_DOUBLE_INDEXING_TYPES:
-                    m_out.storeDouble(
-                        lowDouble(edge),
-                        arrayValues.butterfly, m_heaps.indexedDoubleProperties[operandIndex]);
-                    break;
-                    
-                case ALL_INT32_INDEXING_TYPES:
-                case ALL_CONTIGUOUS_INDEXING_TYPES:
-                    m_out.store64(
-                        lowJSValue(edge, ManualOperandSpeculation),
-                        arrayValues.butterfly,
-                        m_heaps.forIndexingType(m_node-&gt;indexingType())-&gt;at(operandIndex));
-                    break;
-                    
-                default:
-                    DFG_CRASH(m_graph, m_node, &quot;Corrupt indexing type&quot;);
-                    break;
-                }
-            }
-            
-            setJSValue(arrayValues.array);
-            return;
-        }
-        
-        if (!m_node-&gt;numChildren()) {
-            setJSValue(vmCall(
-                m_out.int64, m_out.operation(operationNewEmptyArray), m_callFrame,
-                m_out.constIntPtr(structure)));
-            return;
-        }
-        
-        size_t scratchSize = sizeof(EncodedJSValue) * m_node-&gt;numChildren();
-        ASSERT(scratchSize);
-        ScratchBuffer* scratchBuffer = vm().scratchBufferForSize(scratchSize);
-        EncodedJSValue* buffer = static_cast&lt;EncodedJSValue*&gt;(scratchBuffer-&gt;dataBuffer());
-        
-        for (unsigned operandIndex = 0; operandIndex &lt; m_node-&gt;numChildren(); ++operandIndex) {
-            Edge edge = m_graph.varArgChild(m_node, operandIndex);
-            m_out.store64(
-                lowJSValue(edge, ManualOperandSpeculation),
-                m_out.absolute(buffer + operandIndex));
-        }
-        
-        m_out.storePtr(
-            m_out.constIntPtr(scratchSize), m_out.absolute(scratchBuffer-&gt;activeLengthPtr()));
-        
-        LValue result = vmCall(
-            m_out.int64, m_out.operation(operationNewArray), m_callFrame,
-            m_out.constIntPtr(structure), m_out.constIntPtr(buffer),
-            m_out.constIntPtr(m_node-&gt;numChildren()));
-        
-        m_out.storePtr(m_out.intPtrZero, m_out.absolute(scratchBuffer-&gt;activeLengthPtr()));
-        
-        setJSValue(result);
-    }
-    
-    void compileNewArrayBuffer()
-    {
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
-            m_node-&gt;indexingType());
-        
-        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
-            unsigned numElements = m_node-&gt;numConstants();
-            
-            ArrayValues arrayValues = allocateJSArray(structure, numElements);
-            
-            JSValue* data = codeBlock()-&gt;constantBuffer(m_node-&gt;startConstant());
-            for (unsigned index = 0; index &lt; m_node-&gt;numConstants(); ++index) {
-                int64_t value;
-                if (hasDouble(m_node-&gt;indexingType()))
-                    value = bitwise_cast&lt;int64_t&gt;(data[index].asNumber());
-                else
-                    value = JSValue::encode(data[index]);
-                
-                m_out.store64(
-                    m_out.constInt64(value),
-                    arrayValues.butterfly,
-                    m_heaps.forIndexingType(m_node-&gt;indexingType())-&gt;at(index));
-            }
-            
-            setJSValue(arrayValues.array);
-            return;
-        }
-        
-        setJSValue(vmCall(
-            m_out.int64, m_out.operation(operationNewArrayBuffer), m_callFrame,
-            m_out.constIntPtr(structure), m_out.constIntPtr(m_node-&gt;startConstant()),
-            m_out.constIntPtr(m_node-&gt;numConstants())));
-    }
-    
-    void compileNewArrayWithSize()
-    {
-        LValue publicLength = lowInt32(m_node-&gt;child1());
-        
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        Structure* structure = globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(
-            m_node-&gt;indexingType());
-        
-        if (!globalObject-&gt;isHavingABadTime() &amp;&amp; !hasAnyArrayStorage(m_node-&gt;indexingType())) {
-            ASSERT(
-                hasUndecided(structure-&gt;indexingType())
-                || hasInt32(structure-&gt;indexingType())
-                || hasDouble(structure-&gt;indexingType())
-                || hasContiguous(structure-&gt;indexingType()));
-
-            LBasicBlock fastCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize fast case&quot;));
-            LBasicBlock largeCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize large case&quot;));
-            LBasicBlock failCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize fail case&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize continuation&quot;));
-            
-            m_out.branch(
-                m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
-                rarely(largeCase), usually(fastCase));
-
-            LBasicBlock lastNext = m_out.appendTo(fastCase, largeCase);
-            
-            // We don't round up to BASE_VECTOR_LEN for new Array(blah).
-            LValue vectorLength = publicLength;
-            
-            LValue payloadSize =
-                m_out.shl(m_out.zeroExt(vectorLength, m_out.intPtr), m_out.constIntPtr(3));
-            
-            LValue butterflySize = m_out.add(
-                payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
-            
-            LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, failCase);
-            
-            LValue butterfly = m_out.sub(endOfStorage, payloadSize);
-            
-            LValue object = allocateObject&lt;JSArray&gt;(structure, butterfly, failCase);
-            
-            m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
-            m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
-            
-            if (hasDouble(m_node-&gt;indexingType())) {
-                LBasicBlock initLoop = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize double init loop&quot;));
-                LBasicBlock initDone = FTL_NEW_BLOCK(m_out, (&quot;NewArrayWithSize double init done&quot;));
-                
-                ValueFromBlock originalIndex = m_out.anchor(vectorLength);
-                ValueFromBlock originalPointer = m_out.anchor(butterfly);
-                m_out.branch(
-                    m_out.notZero32(vectorLength), unsure(initLoop), unsure(initDone));
-                
-                LBasicBlock initLastNext = m_out.appendTo(initLoop, initDone);
-                LValue index = m_out.phi(m_out.int32, originalIndex);
-                LValue pointer = m_out.phi(m_out.intPtr, originalPointer);
-                
-                m_out.store64(
-                    m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)),
-                    TypedPointer(m_heaps.indexedDoubleProperties.atAnyIndex(), pointer));
-                
-                LValue nextIndex = m_out.sub(index, m_out.int32One);
-                m_out.addIncomingToPhi(index, m_out.anchor(nextIndex));
-                m_out.addIncomingToPhi(pointer, m_out.anchor(m_out.add(pointer, m_out.intPtrEight)));
-                m_out.branch(
-                    m_out.notZero32(nextIndex), unsure(initLoop), unsure(initDone));
-                
-                m_out.appendTo(initDone, initLastNext);
-            }
-            
-            ValueFromBlock fastResult = m_out.anchor(object);
-            m_out.jump(continuation);
-            
-            m_out.appendTo(largeCase, failCase);
-            ValueFromBlock largeStructure = m_out.anchor(m_out.constIntPtr(
-                globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)));
-            m_out.jump(slowCase);
-            
-            m_out.appendTo(failCase, slowCase);
-            ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
-            m_out.jump(slowCase);
-            
-            m_out.appendTo(slowCase, continuation);
-            LValue structureValue = m_out.phi(
-                m_out.intPtr, largeStructure, failStructure);
-            LValue slowResultValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationNewArrayWithSize, locations[0].directGPR(),
-                        locations[1].directGPR(), locations[2].directGPR());
-                },
-                structureValue, publicLength);
-            ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
-            return;
-        }
-        
-        LValue structureValue = m_out.select(
-            m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
-            m_out.constIntPtr(
-                globalObject-&gt;arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)),
-            m_out.constIntPtr(structure));
-        setJSValue(vmCall(m_out.int64, m_out.operation(operationNewArrayWithSize), m_callFrame, structureValue, publicLength));
-    }
-
-    void compileNewTypedArray()
-    {
-        TypedArrayType type = m_node-&gt;typedArrayType();
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        
-        switch (m_node-&gt;child1().useKind()) {
-        case Int32Use: {
-            Structure* structure = globalObject-&gt;typedArrayStructure(type);
-
-            LValue size = lowInt32(m_node-&gt;child1());
-
-            LBasicBlock smallEnoughCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray small enough case&quot;));
-            LBasicBlock nonZeroCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray non-zero case&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NewTypedArray continuation&quot;));
-
-            m_out.branch(
-                m_out.above(size, m_out.constInt32(JSArrayBufferView::fastSizeLimit)),
-                rarely(slowCase), usually(smallEnoughCase));
-
-            LBasicBlock lastNext = m_out.appendTo(smallEnoughCase, nonZeroCase);
-
-            m_out.branch(m_out.notZero32(size), usually(nonZeroCase), rarely(slowCase));
-
-            m_out.appendTo(nonZeroCase, slowCase);
-
-            LValue byteSize =
-                m_out.shl(m_out.zeroExtPtr(size), m_out.constInt32(logElementSize(type)));
-            if (elementSize(type) &lt; 8) {
-                byteSize = m_out.bitAnd(
-                    m_out.add(byteSize, m_out.constIntPtr(7)),
-                    m_out.constIntPtr(~static_cast&lt;intptr_t&gt;(7)));
-            }
-        
-            LValue storage = allocateBasicStorage(byteSize, slowCase);
-
-            LValue fastResultValue =
-                allocateObject&lt;JSArrayBufferView&gt;(structure, m_out.intPtrZero, slowCase);
-
-            m_out.storePtr(storage, fastResultValue, m_heaps.JSArrayBufferView_vector);
-            m_out.store32(size, fastResultValue, m_heaps.JSArrayBufferView_length);
-            m_out.store32(m_out.constInt32(FastTypedArray), fastResultValue, m_heaps.JSArrayBufferView_mode);
-
-            ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-            m_out.jump(continuation);
-
-            m_out.appendTo(slowCase, continuation);
-
-            LValue slowResultValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationNewTypedArrayWithSizeForType(type), locations[0].directGPR(),
-                        CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR());
-                },
-                size);
-            ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.intPtr, fastResult, slowResult));
-            return;
-        }
-
-        case UntypedUse: {
-            LValue argument = lowJSValue(m_node-&gt;child1());
-
-            LValue result = vmCall(
-                m_out.intPtr, m_out.operation(operationNewTypedArrayWithOneArgumentForType(type)),
-                m_callFrame, weakPointer(globalObject-&gt;typedArrayStructure(type)), argument);
-
-            setJSValue(result);
-            return;
-        }
-
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            return;
-        }
-    }
-    
-    void compileAllocatePropertyStorage()
-    {
-        LValue object = lowCell(m_node-&gt;child1());
-        setStorage(allocatePropertyStorage(object, m_node-&gt;transition()-&gt;previous));
-    }
-
-    void compileReallocatePropertyStorage()
-    {
-        Transition* transition = m_node-&gt;transition();
-        LValue object = lowCell(m_node-&gt;child1());
-        LValue oldStorage = lowStorage(m_node-&gt;child2());
-        
-        setStorage(
-            reallocatePropertyStorage(
-                object, oldStorage, transition-&gt;previous, transition-&gt;next));
-    }
-    
-    void compileToStringOrCallStringConstructor()
-    {
-        switch (m_node-&gt;child1().useKind()) {
-        case StringObjectUse: {
-            LValue cell = lowCell(m_node-&gt;child1());
-            speculateStringObjectForCell(m_node-&gt;child1(), cell);
-            m_interpreter.filter(m_node-&gt;child1(), SpecStringObject);
-            
-            setJSValue(m_out.loadPtr(cell, m_heaps.JSWrapperObject_internalValue));
-            return;
-        }
-            
-        case StringOrStringObjectUse: {
-            LValue cell = lowCell(m_node-&gt;child1());
-            LValue structureID = m_out.load32(cell, m_heaps.JSCell_structureID);
-            
-            LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;ToString StringOrStringObject not string case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToString StringOrStringObject continuation&quot;));
-            
-            ValueFromBlock simpleResult = m_out.anchor(cell);
-            m_out.branch(
-                m_out.equal(structureID, m_out.constInt32(vm().stringStructure-&gt;id())),
-                unsure(continuation), unsure(notString));
-            
-            LBasicBlock lastNext = m_out.appendTo(notString, continuation);
-            speculateStringObjectForStructureID(m_node-&gt;child1(), structureID);
-            ValueFromBlock unboxedResult = m_out.anchor(
-                m_out.loadPtr(cell, m_heaps.JSWrapperObject_internalValue));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, simpleResult, unboxedResult));
-            
-            m_interpreter.filter(m_node-&gt;child1(), SpecString | SpecStringObject);
-            return;
-        }
-            
-        case CellUse:
-        case UntypedUse: {
-            LValue value;
-            if (m_node-&gt;child1().useKind() == CellUse)
-                value = lowCell(m_node-&gt;child1());
-            else
-                value = lowJSValue(m_node-&gt;child1());
-            
-            LBasicBlock isCell = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse is cell&quot;));
-            LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse not string&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToString CellUse/UntypedUse continuation&quot;));
-            
-            LValue isCellPredicate;
-            if (m_node-&gt;child1().useKind() == CellUse)
-                isCellPredicate = m_out.booleanTrue;
-            else
-                isCellPredicate = this-&gt;isCell(value, provenType(m_node-&gt;child1()));
-            m_out.branch(isCellPredicate, unsure(isCell), unsure(notString));
-            
-            LBasicBlock lastNext = m_out.appendTo(isCell, notString);
-            ValueFromBlock simpleResult = m_out.anchor(value);
-            LValue isStringPredicate;
-            if (m_node-&gt;child1()-&gt;prediction() &amp; SpecString) {
-                isStringPredicate = isString(value, provenType(m_node-&gt;child1()));
-            } else
-                isStringPredicate = m_out.booleanFalse;
-            m_out.branch(isStringPredicate, unsure(continuation), unsure(notString));
-            
-            m_out.appendTo(notString, continuation);
-            LValue operation;
-            if (m_node-&gt;child1().useKind() == CellUse)
-                operation = m_out.operation(m_node-&gt;op() == ToString ? operationToStringOnCell : operationCallStringConstructorOnCell);
-            else
-                operation = m_out.operation(m_node-&gt;op() == ToString ? operationToString : operationCallStringConstructor);
-            ValueFromBlock convertedResult = m_out.anchor(vmCall(m_out.int64, operation, m_callFrame, value));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setJSValue(m_out.phi(m_out.int64, simpleResult, convertedResult));
-            return;
-        }
-            
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            break;
-        }
-    }
-    
-    void compileToPrimitive()
-    {
-        LValue value = lowJSValue(m_node-&gt;child1());
-        
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive cell case&quot;));
-        LBasicBlock isObjectCase = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive object case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ToPrimitive continuation&quot;));
-        
-        Vector&lt;ValueFromBlock, 3&gt; results;
-        
-        results.append(m_out.anchor(value));
-        m_out.branch(
-            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, isObjectCase);
-        results.append(m_out.anchor(value));
-        m_out.branch(
-            isObject(value, provenType(m_node-&gt;child1())),
-            unsure(isObjectCase), unsure(continuation));
-        
-        m_out.appendTo(isObjectCase, continuation);
-        results.append(m_out.anchor(vmCall(
-            m_out.int64, m_out.operation(operationToPrimitive), m_callFrame, value)));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, results));
-    }
-    
-    void compileMakeRope()
-    {
-        LValue kids[3];
-        unsigned numKids;
-        kids[0] = lowCell(m_node-&gt;child1());
-        kids[1] = lowCell(m_node-&gt;child2());
-        if (m_node-&gt;child3()) {
-            kids[2] = lowCell(m_node-&gt;child3());
-            numKids = 3;
-        } else {
-            kids[2] = nullptr;
-            numKids = 2;
-        }
-        
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MakeRope slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MakeRope continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        MarkedAllocator&amp; allocator =
-            vm().heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
-        
-        LValue result = allocateCell(
-            m_out.constIntPtr(&amp;allocator),
-            vm().stringStructure.get(),
-            slowPath);
-        
-        m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value);
-        for (unsigned i = 0; i &lt; numKids; ++i)
-            m_out.storePtr(kids[i], result, m_heaps.JSRopeString_fibers[i]);
-        for (unsigned i = numKids; i &lt; JSRopeString::s_maxInternalRopeLength; ++i)
-            m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSRopeString_fibers[i]);
-        LValue flags = m_out.load32(kids[0], m_heaps.JSString_flags);
-        LValue length = m_out.load32(kids[0], m_heaps.JSString_length);
-        for (unsigned i = 1; i &lt; numKids; ++i) {
-            flags = m_out.bitAnd(flags, m_out.load32(kids[i], m_heaps.JSString_flags));
-            CheckValue* lengthCheck = m_out.speculateAdd(
-                length, m_out.load32(kids[i], m_heaps.JSString_length));
-            blessSpeculation(lengthCheck, Uncountable, noValue(), nullptr, m_origin);
-            length = lengthCheck;
-        }
-        m_out.store32(
-            m_out.bitAnd(m_out.constInt32(JSString::Is8Bit), flags),
-            result, m_heaps.JSString_flags);
-        m_out.store32(length, result, m_heaps.JSString_length);
-        
-        ValueFromBlock fastResult = m_out.anchor(result);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        LValue slowResultValue;
-        switch (numKids) {
-        case 2:
-            slowResultValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationMakeRope2, locations[0].directGPR(), locations[1].directGPR(),
-                        locations[2].directGPR());
-                }, kids[0], kids[1]);
-            break;
-        case 3:
-            slowResultValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationMakeRope3, locations[0].directGPR(), locations[1].directGPR(),
-                        locations[2].directGPR(), locations[3].directGPR());
-                }, kids[0], kids[1], kids[2]);
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad number of children&quot;);
-            break;
-        }
-        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-    }
-    
-    void compileStringCharAt()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue index = lowInt32(m_node-&gt;child2());
-        LValue storage = lowStorage(m_node-&gt;child3());
-            
-        LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String fast path&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String continuation&quot;));
-            
-        m_out.branch(
-            m_out.aboveOrEqual(
-                index, m_out.load32NonNegative(base, m_heaps.JSString_length)),
-            rarely(slowPath), usually(fastPath));
-            
-        LBasicBlock lastNext = m_out.appendTo(fastPath, slowPath);
-            
-        LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value);
-            
-        LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String 8-bit case&quot;));
-        LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String 16-bit case&quot;));
-        LBasicBlock bitsContinuation = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String bitness continuation&quot;));
-        LBasicBlock bigCharacter = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String big character&quot;));
-            
-        m_out.branch(
-            m_out.testIsZero32(
-                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIs8Bit())),
-            unsure(is16Bit), unsure(is8Bit));
-            
-        m_out.appendTo(is8Bit, is16Bit);
-            
-        ValueFromBlock char8Bit = m_out.anchor(
-            m_out.load8ZeroExt32(m_out.baseIndex(
-                m_heaps.characters8, storage, m_out.zeroExtPtr(index),
-                provenValue(m_node-&gt;child2()))));
-        m_out.jump(bitsContinuation);
-            
-        m_out.appendTo(is16Bit, bigCharacter);
-
-        LValue char16BitValue = m_out.load16ZeroExt32(
-            m_out.baseIndex(
-                m_heaps.characters16, storage, m_out.zeroExtPtr(index),
-                provenValue(m_node-&gt;child2())));
-        ValueFromBlock char16Bit = m_out.anchor(char16BitValue);
-        m_out.branch(
-            m_out.aboveOrEqual(char16BitValue, m_out.constInt32(0x100)),
-            rarely(bigCharacter), usually(bitsContinuation));
-            
-        m_out.appendTo(bigCharacter, bitsContinuation);
-            
-        Vector&lt;ValueFromBlock, 4&gt; results;
-        results.append(m_out.anchor(vmCall(
-            m_out.int64, m_out.operation(operationSingleCharacterString),
-            m_callFrame, char16BitValue)));
-        m_out.jump(continuation);
-            
-        m_out.appendTo(bitsContinuation, slowPath);
-            
-        LValue character = m_out.phi(m_out.int32, char8Bit, char16Bit);
-            
-        LValue smallStrings = m_out.constIntPtr(vm().smallStrings.singleCharacterStrings());
-            
-        results.append(m_out.anchor(m_out.loadPtr(m_out.baseIndex(
-            m_heaps.singleCharacterStrings, smallStrings, m_out.zeroExtPtr(character)))));
-        m_out.jump(continuation);
-            
-        m_out.appendTo(slowPath, continuation);
-            
-        if (m_node-&gt;arrayMode().isInBounds()) {
-            speculate(OutOfBounds, noValue(), nullptr, m_out.booleanTrue);
-            results.append(m_out.anchor(m_out.intPtrZero));
-        } else {
-            JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-                
-            if (globalObject-&gt;stringPrototypeChainIsSane()) {
-                // FIXME: This could be captured using a Speculation mode that means
-                // &quot;out-of-bounds loads return a trivial value&quot;, something like
-                // SaneChainOutOfBounds.
-                // https://bugs.webkit.org/show_bug.cgi?id=144668
-                
-                m_graph.watchpoints().addLazily(globalObject-&gt;stringPrototype()-&gt;structure()-&gt;transitionWatchpointSet());
-                m_graph.watchpoints().addLazily(globalObject-&gt;objectPrototype()-&gt;structure()-&gt;transitionWatchpointSet());
-                
-                LBasicBlock negativeIndex = FTL_NEW_BLOCK(m_out, (&quot;GetByVal String negative index&quot;));
-                    
-                results.append(m_out.anchor(m_out.constInt64(JSValue::encode(jsUndefined()))));
-                m_out.branch(
-                    m_out.lessThan(index, m_out.int32Zero),
-                    rarely(negativeIndex), usually(continuation));
-                    
-                m_out.appendTo(negativeIndex, continuation);
-            }
-                
-            results.append(m_out.anchor(vmCall(
-                m_out.int64, m_out.operation(operationGetByValStringInt), m_callFrame, base, index)));
-        }
-            
-        m_out.jump(continuation);
-            
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, results));
-    }
-    
-    void compileStringCharCodeAt()
-    {
-        LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt 8-bit case&quot;));
-        LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt 16-bit case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;StringCharCodeAt continuation&quot;));
-
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue index = lowInt32(m_node-&gt;child2());
-        LValue storage = lowStorage(m_node-&gt;child3());
-        
-        speculate(
-            Uncountable, noValue(), nullptr,
-            m_out.aboveOrEqual(
-                index, m_out.load32NonNegative(base, m_heaps.JSString_length)));
-        
-        LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value);
-        
-        m_out.branch(
-            m_out.testIsZero32(
-                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIs8Bit())),
-            unsure(is16Bit), unsure(is8Bit));
-            
-        LBasicBlock lastNext = m_out.appendTo(is8Bit, is16Bit);
-            
-        ValueFromBlock char8Bit = m_out.anchor(
-            m_out.load8ZeroExt32(m_out.baseIndex(
-                m_heaps.characters8, storage, m_out.zeroExtPtr(index),
-                provenValue(m_node-&gt;child2()))));
-        m_out.jump(continuation);
-            
-        m_out.appendTo(is16Bit, continuation);
-            
-        ValueFromBlock char16Bit = m_out.anchor(
-            m_out.load16ZeroExt32(m_out.baseIndex(
-                m_heaps.characters16, storage, m_out.zeroExtPtr(index),
-                provenValue(m_node-&gt;child2()))));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        
-        setInt32(m_out.phi(m_out.int32, char8Bit, char16Bit));
-    }
-
-    void compileStringFromCharCode()
-    {
-        Edge childEdge = m_node-&gt;child1();
-        
-        if (childEdge.useKind() == UntypedUse) {
-            LValue result = vmCall(
-                m_out.int64, m_out.operation(operationStringFromCharCodeUntyped), m_callFrame,
-                lowJSValue(childEdge));
-            setJSValue(result);
-            return;
-        }
-
-        DFG_ASSERT(m_graph, m_node, childEdge.useKind() == Int32Use);
-
-        LValue value = lowInt32(childEdge);
-        
-        LBasicBlock smallIntCase = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode small int case&quot;));
-        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode slow case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;StringFromCharCode continuation&quot;));
-
-        m_out.branch(
-            m_out.aboveOrEqual(value, m_out.constInt32(0xff)),
-            rarely(slowCase), usually(smallIntCase));
-
-        LBasicBlock lastNext = m_out.appendTo(smallIntCase, slowCase);
-
-        LValue smallStrings = m_out.constIntPtr(vm().smallStrings.singleCharacterStrings());
-        LValue fastResultValue = m_out.loadPtr(
-            m_out.baseIndex(m_heaps.singleCharacterStrings, smallStrings, m_out.zeroExtPtr(value)));
-        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-        m_out.jump(continuation);
-
-        m_out.appendTo(slowCase, continuation);
-
-        LValue slowResultValue = vmCall(
-            m_out.intPtr, m_out.operation(operationStringFromCharCode), m_callFrame, value);
-        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-
-        setJSValue(m_out.phi(m_out.int64, fastResult, slowResult));
-    }
-    
-    void compileGetByOffset()
-    {
-        StorageAccessData&amp; data = m_node-&gt;storageAccessData();
-        
-        setJSValue(loadProperty(
-            lowStorage(m_node-&gt;child1()), data.identifierNumber, data.offset));
-    }
-    
-    void compileGetGetter()
-    {
-        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.GetterSetter_getter));
-    }
-    
-    void compileGetSetter()
-    {
-        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.GetterSetter_setter));
-    }
-    
-    void compileMultiGetByOffset()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        
-        MultiGetByOffsetData&amp; data = m_node-&gt;multiGetByOffsetData();
-
-        if (data.cases.isEmpty()) {
-            // Protect against creating a Phi function with zero inputs. LLVM doesn't like that.
-            terminate(BadCache);
-            return;
-        }
-        
-        Vector&lt;LBasicBlock, 2&gt; blocks(data.cases.size());
-        for (unsigned i = data.cases.size(); i--;)
-            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset case &quot;, i));
-        LBasicBlock exit = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset fail&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MultiGetByOffset continuation&quot;));
-        
-        Vector&lt;SwitchCase, 2&gt; cases;
-        StructureSet baseSet;
-        for (unsigned i = data.cases.size(); i--;) {
-            MultiGetByOffsetCase getCase = data.cases[i];
-            for (unsigned j = getCase.set().size(); j--;) {
-                Structure* structure = getCase.set()[j];
-                baseSet.add(structure);
-                cases.append(SwitchCase(weakStructureID(structure), blocks[i], Weight(1)));
-            }
-        }
-        m_out.switchInstruction(
-            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
-        
-        LBasicBlock lastNext = m_out.m_nextBlock;
-        
-        Vector&lt;ValueFromBlock, 2&gt; results;
-        for (unsigned i = data.cases.size(); i--;) {
-            MultiGetByOffsetCase getCase = data.cases[i];
-            GetByOffsetMethod method = getCase.method();
-            
-            m_out.appendTo(blocks[i], i + 1 &lt; data.cases.size() ? blocks[i + 1] : exit);
-            
-            LValue result;
-            
-            switch (method.kind()) {
-            case GetByOffsetMethod::Invalid:
-                RELEASE_ASSERT_NOT_REACHED();
-                break;
-                
-            case GetByOffsetMethod::Constant:
-                result = m_out.constInt64(JSValue::encode(method.constant()-&gt;value()));
-                break;
-                
-            case GetByOffsetMethod::Load:
-            case GetByOffsetMethod::LoadFromPrototype: {
-                LValue propertyBase;
-                if (method.kind() == GetByOffsetMethod::Load)
-                    propertyBase = base;
-                else
-                    propertyBase = weakPointer(method.prototype()-&gt;value().asCell());
-                if (!isInlineOffset(method.offset()))
-                    propertyBase = loadButterflyReadOnly(propertyBase);
-                result = loadProperty(
-                    propertyBase, data.identifierNumber, method.offset());
-                break;
-            } }
-            
-            results.append(m_out.anchor(result));
-            m_out.jump(continuation);
-        }
-        
-        m_out.appendTo(exit, continuation);
-        if (!m_interpreter.forNode(m_node-&gt;child1()).m_structure.isSubsetOf(baseSet))
-            speculate(BadCache, noValue(), nullptr, m_out.booleanTrue);
-        m_out.unreachable();
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, results));
-    }
-    
-    void compilePutByOffset()
-    {
-        StorageAccessData&amp; data = m_node-&gt;storageAccessData();
-        
-        storeProperty(
-            lowJSValue(m_node-&gt;child3()),
-            lowStorage(m_node-&gt;child1()), data.identifierNumber, data.offset);
-    }
-    
-    void compileMultiPutByOffset()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        LValue value = lowJSValue(m_node-&gt;child2());
-        
-        MultiPutByOffsetData&amp; data = m_node-&gt;multiPutByOffsetData();
-        
-        Vector&lt;LBasicBlock, 2&gt; blocks(data.variants.size());
-        for (unsigned i = data.variants.size(); i--;)
-            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset case &quot;, i));
-        LBasicBlock exit = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset fail&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MultiPutByOffset continuation&quot;));
-        
-        Vector&lt;SwitchCase, 2&gt; cases;
-        StructureSet baseSet;
-        for (unsigned i = data.variants.size(); i--;) {
-            PutByIdVariant variant = data.variants[i];
-            for (unsigned j = variant.oldStructure().size(); j--;) {
-                Structure* structure = variant.oldStructure()[j];
-                baseSet.add(structure);
-                cases.append(SwitchCase(weakStructureID(structure), blocks[i], Weight(1)));
-            }
-        }
-        m_out.switchInstruction(
-            m_out.load32(base, m_heaps.JSCell_structureID), cases, exit, Weight(0));
-        
-        LBasicBlock lastNext = m_out.m_nextBlock;
-        
-        for (unsigned i = data.variants.size(); i--;) {
-            m_out.appendTo(blocks[i], i + 1 &lt; data.variants.size() ? blocks[i + 1] : exit);
-            
-            PutByIdVariant variant = data.variants[i];
-
-            checkInferredType(m_node-&gt;child2(), value, variant.requiredType());
-            
-            LValue storage;
-            if (variant.kind() == PutByIdVariant::Replace) {
-                if (isInlineOffset(variant.offset()))
-                    storage = base;
-                else
-                    storage = loadButterflyWithBarrier(base);
-            } else {
-                m_graph.m_plan.transitions.addLazily(
-                    codeBlock(), m_node-&gt;origin.semantic.codeOriginOwner(),
-                    variant.oldStructureForTransition(), variant.newStructure());
-                
-                storage = storageForTransition(
-                    base, variant.offset(),
-                    variant.oldStructureForTransition(), variant.newStructure());
-
-                ASSERT(variant.oldStructureForTransition()-&gt;indexingType() == variant.newStructure()-&gt;indexingType());
-                ASSERT(variant.oldStructureForTransition()-&gt;typeInfo().inlineTypeFlags() == variant.newStructure()-&gt;typeInfo().inlineTypeFlags());
-                ASSERT(variant.oldStructureForTransition()-&gt;typeInfo().type() == variant.newStructure()-&gt;typeInfo().type());
-                m_out.store32(
-                    weakStructureID(variant.newStructure()), base, m_heaps.JSCell_structureID);
-            }
-            
-            storeProperty(value, storage, data.identifierNumber, variant.offset());
-            m_out.jump(continuation);
-        }
-        
-        m_out.appendTo(exit, continuation);
-        if (!m_interpreter.forNode(m_node-&gt;child1()).m_structure.isSubsetOf(baseSet))
-            speculate(BadCache, noValue(), nullptr, m_out.booleanTrue);
-        m_out.unreachable();
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void compileGetGlobalVariable()
-    {
-        setJSValue(m_out.load64(m_out.absolute(m_node-&gt;variablePointer())));
-    }
-    
-    void compilePutGlobalVariable()
-    {
-        m_out.store64(
-            lowJSValue(m_node-&gt;child2()), m_out.absolute(m_node-&gt;variablePointer()));
-    }
-    
-    void compileNotifyWrite()
-    {
-        WatchpointSet* set = m_node-&gt;watchpointSet();
-        
-        LBasicBlock isNotInvalidated = FTL_NEW_BLOCK(m_out, (&quot;NotifyWrite not invalidated case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;NotifyWrite continuation&quot;));
-        
-        LValue state = m_out.load8ZeroExt32(m_out.absolute(set-&gt;addressOfState()));
-        m_out.branch(
-            m_out.equal(state, m_out.constInt32(IsInvalidated)),
-            usually(continuation), rarely(isNotInvalidated));
-        
-        LBasicBlock lastNext = m_out.appendTo(isNotInvalidated, continuation);
-
-        lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationNotifyWrite, InvalidGPRReg, CCallHelpers::TrustedImmPtr(set));
-            });
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void compileGetCallee()
-    {
-        setJSValue(m_out.loadPtr(addressFor(JSStack::Callee)));
-    }
-    
-    void compileGetArgumentCount()
-    {
-        setInt32(m_out.load32(payloadFor(JSStack::ArgumentCount)));
-    }
-    
-    void compileGetScope()
-    {
-        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.JSFunction_scope));
-    }
-    
-    void compileSkipScope()
-    {
-        setJSValue(m_out.loadPtr(lowCell(m_node-&gt;child1()), m_heaps.JSScope_next));
-    }
-    
-    void compileGetClosureVar()
-    {
-        setJSValue(
-            m_out.load64(
-                lowCell(m_node-&gt;child1()),
-                m_heaps.JSEnvironmentRecord_variables[m_node-&gt;scopeOffset().offset()]));
-    }
-    
-    void compilePutClosureVar()
-    {
-        m_out.store64(
-            lowJSValue(m_node-&gt;child2()),
-            lowCell(m_node-&gt;child1()),
-            m_heaps.JSEnvironmentRecord_variables[m_node-&gt;scopeOffset().offset()]);
-    }
-    
-    void compileGetFromArguments()
-    {
-        setJSValue(
-            m_out.load64(
-                lowCell(m_node-&gt;child1()),
-                m_heaps.DirectArguments_storage[m_node-&gt;capturedArgumentsOffset().offset()]));
-    }
-    
-    void compilePutToArguments()
-    {
-        m_out.store64(
-            lowJSValue(m_node-&gt;child2()),
-            lowCell(m_node-&gt;child1()),
-            m_heaps.DirectArguments_storage[m_node-&gt;capturedArgumentsOffset().offset()]);
-    }
-    
-    void compileCompareEq()
-    {
-        if (m_node-&gt;isBinaryUseKind(Int32Use)
-            || m_node-&gt;isBinaryUseKind(Int52RepUse)
-            || m_node-&gt;isBinaryUseKind(DoubleRepUse)
-            || m_node-&gt;isBinaryUseKind(ObjectUse)
-            || m_node-&gt;isBinaryUseKind(BooleanUse)
-            || m_node-&gt;isBinaryUseKind(SymbolUse)
-            || m_node-&gt;isBinaryUseKind(StringIdentUse)
-            || m_node-&gt;isBinaryUseKind(StringUse)) {
-            compileCompareStrictEq();
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(ObjectUse, ObjectOrOtherUse)) {
-            compareEqObjectOrOtherToObject(m_node-&gt;child2(), m_node-&gt;child1());
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(ObjectOrOtherUse, ObjectUse)) {
-            compareEqObjectOrOtherToObject(m_node-&gt;child1(), m_node-&gt;child2());
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            nonSpeculativeCompare(
-                [&amp;] (LValue left, LValue right) {
-                    return m_out.equal(left, right);
-                },
-                operationCompareEq);
-            return;
-        }
-
-        if (m_node-&gt;child1().useKind() == OtherUse) {
-            ASSERT(!m_interpreter.needsTypeCheck(m_node-&gt;child1(), SpecOther));
-            setBoolean(equalNullOrUndefined(m_node-&gt;child2(), AllCellsAreFalse, EqualNullOrUndefined, ManualOperandSpeculation));
-            return;
-        }
-
-        if (m_node-&gt;child2().useKind() == OtherUse) {
-            ASSERT(!m_interpreter.needsTypeCheck(m_node-&gt;child2(), SpecOther));
-            setBoolean(equalNullOrUndefined(m_node-&gt;child1(), AllCellsAreFalse, EqualNullOrUndefined, ManualOperandSpeculation));
-            return;
-        }
-
-        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
-    }
-    
-    void compileCompareStrictEq()
-    {
-        if (m_node-&gt;isBinaryUseKind(Int32Use)) {
-            setBoolean(
-                m_out.equal(lowInt32(m_node-&gt;child1()), lowInt32(m_node-&gt;child2())));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(Int52RepUse)) {
-            Int52Kind kind;
-            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
-            LValue right = lowInt52(m_node-&gt;child2(), kind);
-            setBoolean(m_out.equal(left, right));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(DoubleRepUse)) {
-            setBoolean(
-                m_out.doubleEqual(lowDouble(m_node-&gt;child1()), lowDouble(m_node-&gt;child2())));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(StringIdentUse)) {
-            setBoolean(
-                m_out.equal(lowStringIdent(m_node-&gt;child1()), lowStringIdent(m_node-&gt;child2())));
-            return;
-        }
-
-        if (m_node-&gt;isBinaryUseKind(StringUse)) {
-            LValue left = lowCell(m_node-&gt;child1());
-            LValue right = lowCell(m_node-&gt;child2());
-
-            LBasicBlock notTriviallyEqualCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq/String not trivially equal case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq/String continuation&quot;));
-
-            speculateString(m_node-&gt;child1(), left);
-
-            ValueFromBlock fastResult = m_out.anchor(m_out.booleanTrue);
-            m_out.branch(
-                m_out.equal(left, right), unsure(continuation), unsure(notTriviallyEqualCase));
-
-            LBasicBlock lastNext = m_out.appendTo(notTriviallyEqualCase, continuation);
-
-            speculateString(m_node-&gt;child2(), right);
-            
-            ValueFromBlock slowResult = m_out.anchor(stringsEqual(left, right));
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            setBoolean(m_out.phi(m_out.boolean, fastResult, slowResult));
-            return;
-        }
-
-        if (m_node-&gt;isBinaryUseKind(ObjectUse, UntypedUse)) {
-            setBoolean(
-                m_out.equal(
-                    lowNonNullObject(m_node-&gt;child1()),
-                    lowJSValue(m_node-&gt;child2())));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(UntypedUse, ObjectUse)) {
-            setBoolean(
-                m_out.equal(
-                    lowNonNullObject(m_node-&gt;child2()),
-                    lowJSValue(m_node-&gt;child1())));
-            return;
-        }
-
-        if (m_node-&gt;isBinaryUseKind(ObjectUse)) {
-            setBoolean(
-                m_out.equal(
-                    lowNonNullObject(m_node-&gt;child1()),
-                    lowNonNullObject(m_node-&gt;child2())));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(BooleanUse)) {
-            setBoolean(
-                m_out.equal(lowBoolean(m_node-&gt;child1()), lowBoolean(m_node-&gt;child2())));
-            return;
-        }
-
-        if (m_node-&gt;isBinaryUseKind(SymbolUse)) {
-            LValue left = lowSymbol(m_node-&gt;child1());
-            LValue right = lowSymbol(m_node-&gt;child2());
-            LValue leftStringImpl = m_out.loadPtr(left, m_heaps.Symbol_privateName);
-            LValue rightStringImpl = m_out.loadPtr(right, m_heaps.Symbol_privateName);
-            setBoolean(m_out.equal(leftStringImpl, rightStringImpl));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(MiscUse, UntypedUse)
-            || m_node-&gt;isBinaryUseKind(UntypedUse, MiscUse)) {
-            speculate(m_node-&gt;child1());
-            speculate(m_node-&gt;child2());
-            LValue left = lowJSValue(m_node-&gt;child1(), ManualOperandSpeculation);
-            LValue right = lowJSValue(m_node-&gt;child2(), ManualOperandSpeculation);
-            setBoolean(m_out.equal(left, right));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(StringIdentUse, NotStringVarUse)
-            || m_node-&gt;isBinaryUseKind(NotStringVarUse, StringIdentUse)) {
-            Edge leftEdge = m_node-&gt;childFor(StringIdentUse);
-            Edge rightEdge = m_node-&gt;childFor(NotStringVarUse);
-            
-            LValue left = lowStringIdent(leftEdge);
-            LValue rightValue = lowJSValue(rightEdge, ManualOperandSpeculation);
-            
-            LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar is cell case&quot;));
-            LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar is string case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareStrictEq StringIdent to NotStringVar continuation&quot;));
-            
-            ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-            m_out.branch(
-                isCell(rightValue, provenType(rightEdge)),
-                unsure(isCellCase), unsure(continuation));
-            
-            LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
-            ValueFromBlock notStringResult = m_out.anchor(m_out.booleanFalse);
-            m_out.branch(
-                isString(rightValue, provenType(rightEdge)),
-                unsure(isStringCase), unsure(continuation));
-            
-            m_out.appendTo(isStringCase, continuation);
-            LValue right = m_out.loadPtr(rightValue, m_heaps.JSString_value);
-            speculateStringIdent(rightEdge, rightValue, right);
-            ValueFromBlock isStringResult = m_out.anchor(m_out.equal(left, right));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setBoolean(m_out.phi(m_out.boolean, notCellResult, notStringResult, isStringResult));
-            return;
-        }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
-    }
-    
-    void compileCompareStrictEqConstant()
-    {
-        JSValue constant = m_node-&gt;child2()-&gt;asJSValue();
-
-        setBoolean(
-            m_out.equal(
-                lowJSValue(m_node-&gt;child1()),
-                m_out.constInt64(JSValue::encode(constant))));
-    }
-    
-    void compileCompareLess()
-    {
-        compare(
-            [&amp;] (LValue left, LValue right) {
-                return m_out.lessThan(left, right);
-            },
-            [&amp;] (LValue left, LValue right) {
-                return m_out.doubleLessThan(left, right);
-            },
-            operationCompareLess);
-    }
-    
-    void compileCompareLessEq()
-    {
-        compare(
-            [&amp;] (LValue left, LValue right) {
-                return m_out.lessThanOrEqual(left, right);
-            },
-            [&amp;] (LValue left, LValue right) {
-                return m_out.doubleLessThanOrEqual(left, right);
-            },
-            operationCompareLessEq);
-    }
-    
-    void compileCompareGreater()
-    {
-        compare(
-            [&amp;] (LValue left, LValue right) {
-                return m_out.greaterThan(left, right);
-            },
-            [&amp;] (LValue left, LValue right) {
-                return m_out.doubleGreaterThan(left, right);
-            },
-            operationCompareGreater);
-    }
-    
-    void compileCompareGreaterEq()
-    {
-        compare(
-            [&amp;] (LValue left, LValue right) {
-                return m_out.greaterThanOrEqual(left, right);
-            },
-            [&amp;] (LValue left, LValue right) {
-                return m_out.doubleGreaterThanOrEqual(left, right);
-            },
-            operationCompareGreaterEq);
-    }
-    
-    void compileLogicalNot()
-    {
-        setBoolean(m_out.logicalNot(boolify(m_node-&gt;child1())));
-    }
-
-    void compileCallOrConstruct()
-    {
-        Node* node = m_node;
-        unsigned numArgs = node-&gt;numChildren() - 1;
-
-        LValue jsCallee = lowJSValue(m_graph.varArgChild(node, 0));
-
-        unsigned frameSize = JSStack::CallFrameHeaderSize + numArgs;
-        unsigned alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
-
-        // JS-&gt;JS calling convention requires that the caller allow this much space on top of the stack to
-        // get trashed by the callee, even if not all of that space is used to pass arguments. We tell
-        // B3 this explicitly for two reasons:
-        //
-        // - We will only pass frameSize worth of stuff.
-        // - The trashed stack guarantee is logically separate from the act of passing arguments, so we
-        //   shouldn't rely on Air to infer the trashed stack property based on the arguments it ends
-        //   up seeing.
-        m_proc.requestCallArgAreaSize(alignedFrameSize);
-
-        // Collect the arguments, since this can generate code and we want to generate it before we emit
-        // the call.
-        Vector&lt;ConstrainedValue&gt; arguments;
-
-        // Make sure that the callee goes into GPR0 because that's where the slow path thunks expect the
-        // callee to be.
-        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
-
-        auto addArgument = [&amp;] (LValue value, VirtualRegister reg, int offset) {
-            intptr_t offsetFromSP =
-                (reg.offset() - JSStack::CallerFrameAndPCSize) * sizeof(EncodedJSValue) + offset;
-            arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP)));
-        };
-
-        addArgument(jsCallee, VirtualRegister(JSStack::Callee), 0);
-        addArgument(m_out.constInt32(numArgs), VirtualRegister(JSStack::ArgumentCount), PayloadOffset);
-        for (unsigned i = 0; i &lt; numArgs; ++i)
-            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
-
-        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-        patchpoint-&gt;appendVector(arguments);
-
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-        
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-        patchpoint-&gt;clobberLate(RegisterSet::volatileRegistersForJSCall());
-        patchpoint-&gt;resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
-
-        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                CallSiteIndex callSiteIndex = state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
-
-                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
-
-                jit.store32(
-                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
-                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
-
-                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
-
-                CCallHelpers::DataLabelPtr targetToCheck;
-                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
-                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
-                    CCallHelpers::TrustedImmPtr(0));
-
-                CCallHelpers::Call fastCall = jit.nearCall();
-                CCallHelpers::Jump done = jit.jump();
-
-                slowPath.link(&amp;jit);
-
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
-                CCallHelpers::Call slowCall = jit.nearCall();
-                done.link(&amp;jit);
-
-                callLinkInfo-&gt;setUpCall(
-                    node-&gt;op() == Construct ? CallLinkInfo::Construct : CallLinkInfo::Call,
-                    node-&gt;origin.semantic, GPRInfo::regT0);
-
-                jit.addPtr(
-                    CCallHelpers::TrustedImm32(-params.proc().frameSize()),
-                    GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
-
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
-                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
-
-                        callLinkInfo-&gt;setCallLocations(
-                            linkBuffer.locationOfNearCall(slowCall),
-                            linkBuffer.locationOf(targetToCheck),
-                            linkBuffer.locationOfNearCall(fastCall));
-                    });
-            });
-
-        setJSValue(patchpoint);
-    }
-
-    void compileTailCall()
-    {
-        Node* node = m_node;
-        unsigned numArgs = node-&gt;numChildren() - 1;
-
-        LValue jsCallee = lowJSValue(m_graph.varArgChild(node, 0));
-        
-        // We want B3 to give us all of the arguments using whatever mechanism it thinks is
-        // convenient. The generator then shuffles those arguments into our own call frame,
-        // destroying our frame in the process.
-
-        // Note that we don't have to do anything special for exceptions. A tail call is only a
-        // tail call if it is not inside a try block.
-
-        Vector&lt;ConstrainedValue&gt; arguments;
-
-        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
-
-        for (unsigned i = 0; i &lt; numArgs; ++i) {
-            // Note: we could let the shuffler do boxing for us, but it's not super clear that this
-            // would be better. Also, if we wanted to do that, then we'd have to teach the shuffler
-            // that 32-bit values could land at 4-byte alignment but not 8-byte alignment.
-            
-            ConstrainedValue constrainedValue(
-                lowJSValue(m_graph.varArgChild(node, 1 + i)),
-                ValueRep::WarmAny);
-            arguments.append(constrainedValue);
-        }
-
-        PatchpointValue* patchpoint = m_out.patchpoint(Void);
-        patchpoint-&gt;appendVector(arguments);
-
-        // Prevent any of the arguments from using the scratch register.
-        patchpoint-&gt;clobberEarly(RegisterSet::macroScratchRegisters());
-
-        // We don't have to tell the patchpoint that we will clobber registers, since we won't return
-        // anyway.
-
-        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                CallSiteIndex callSiteIndex = state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
-
-                CallFrameShuffleData shuffleData;
-                shuffleData.numLocals = state-&gt;jitCode-&gt;common.frameRegisterCount;
-                shuffleData.callee = ValueRecovery::inGPR(GPRInfo::regT0, DataFormatJS);
-
-                for (unsigned i = 0; i &lt; numArgs; ++i)
-                    shuffleData.args.append(params[1 + i].recoveryForJSValue());
-
-                shuffleData.setupCalleeSaveRegisters(jit.codeBlock());
-
-                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
-
-                CCallHelpers::DataLabelPtr targetToCheck;
-                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
-                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
-                    CCallHelpers::TrustedImmPtr(0));
-
-                callLinkInfo-&gt;setFrameShuffleData(shuffleData);
-                CallFrameShuffler(jit, shuffleData).prepareForTailCall();
-
-                CCallHelpers::Call fastCall = jit.nearTailCall();
-
-                slowPath.link(&amp;jit);
-
-                // Yes, this is really necessary. You could throw an exception in a host call on the
-                // slow path. That'll route us to lookupExceptionHandler(), which unwinds starting
-                // with the call site index of our frame. Bad things happen if it's not set.
-                jit.store32(
-                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
-                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
-
-                CallFrameShuffler slowPathShuffler(jit, shuffleData);
-                slowPathShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
-                slowPathShuffler.prepareForSlowPath();
-
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
-                CCallHelpers::Call slowCall = jit.nearCall();
-
-                jit.abortWithReason(JITDidReturnFromTailCall);
-
-                callLinkInfo-&gt;setUpCall(CallLinkInfo::TailCall, codeOrigin, GPRInfo::regT0);
-
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
-                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
-
-                        callLinkInfo-&gt;setCallLocations(
-                            linkBuffer.locationOfNearCall(slowCall),
-                            linkBuffer.locationOf(targetToCheck),
-                            linkBuffer.locationOfNearCall(fastCall));
-                    });
-            });
-        
-        m_out.unreachable();
-    }
-    
-    void compileCallOrConstructVarargs()
-    {
-        Node* node = m_node;
-        LValue jsCallee = lowJSValue(m_node-&gt;child1());
-        LValue thisArg = lowJSValue(m_node-&gt;child3());
-        
-        LValue jsArguments = nullptr;
-        bool forwarding = false;
-        
-        switch (node-&gt;op()) {
-        case CallVarargs:
-        case TailCallVarargs:
-        case TailCallVarargsInlinedCaller:
-        case ConstructVarargs:
-            jsArguments = lowJSValue(node-&gt;child2());
-            break;
-        case CallForwardVarargs:
-        case TailCallForwardVarargs:
-        case TailCallForwardVarargsInlinedCaller:
-        case ConstructForwardVarargs:
-            forwarding = true;
-            break;
-        default:
-            DFG_CRASH(m_graph, node, &quot;bad node type&quot;);
-            break;
-        }
-        
-        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-
-        // Append the forms of the arguments that we will use before any clobbering happens.
-        patchpoint-&gt;append(jsCallee, ValueRep::reg(GPRInfo::regT0));
-        if (jsArguments)
-            patchpoint-&gt;appendSomeRegister(jsArguments);
-        patchpoint-&gt;appendSomeRegister(thisArg);
-
-        if (!forwarding) {
-            // Now append them again for after clobbering. Note that the compiler may ask us to use a
-            // different register for the late use, i.e. the post-clobbering version of the value. This gives
-            // the compiler a chance to spill these values without having to burn any callee-saves.
-            patchpoint-&gt;append(jsCallee, ValueRep::LateColdAny);
-            patchpoint-&gt;append(jsArguments, ValueRep::LateColdAny);
-            patchpoint-&gt;append(thisArg, ValueRep::LateColdAny);
-        }
-
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-        
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-        patchpoint-&gt;clobberLate(RegisterSet::volatileRegistersForJSCall());
-        patchpoint-&gt;resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
-
-        // This is the minimum amount of call arg area stack space that all JS-&gt;JS calls always have.
-        unsigned minimumJSCallAreaSize =
-            sizeof(CallerFrameAndPC) +
-            WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(EncodedJSValue));
-
-        m_proc.requestCallArgAreaSize(minimumJSCallAreaSize);
-        
-        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                CallSiteIndex callSiteIndex =
-                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(codeOrigin);
-
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-
-                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
-
-                jit.store32(
-                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
-                    CCallHelpers::tagFor(VirtualRegister(JSStack::ArgumentCount)));
-
-                CallLinkInfo* callLinkInfo = jit.codeBlock()-&gt;addCallLinkInfo();
-                CallVarargsData* data = node-&gt;callVarargsData();
-
-                unsigned argIndex = 1;
-                GPRReg calleeGPR = params[argIndex++].gpr();
-                ASSERT(calleeGPR == GPRInfo::regT0);
-                GPRReg argumentsGPR = jsArguments ? params[argIndex++].gpr() : InvalidGPRReg;
-                GPRReg thisGPR = params[argIndex++].gpr();
-
-                B3::ValueRep calleeLateRep;
-                B3::ValueRep argumentsLateRep;
-                B3::ValueRep thisLateRep;
-                if (!forwarding) {
-                    // If we're not forwarding then we'll need callee, arguments, and this after we
-                    // have potentially clobbered calleeGPR, argumentsGPR, and thisGPR. Our technique
-                    // for this is to supply all of those operands as late uses in addition to
-                    // specifying them as early uses. It's possible that the late use uses a spill
-                    // while the early use uses a register, and it's possible for the late and early
-                    // uses to use different registers. We do know that the late uses interfere with
-                    // all volatile registers and so won't use those, but the early uses may use
-                    // volatile registers and in the case of calleeGPR, it's pinned to regT0 so it
-                    // definitely will.
-                    //
-                    // Note that we have to be super careful with these. It's possible that these
-                    // use a shuffling of the registers used for calleeGPR, argumentsGPR, and
-                    // thisGPR. If that happens and we do for example:
-                    //
-                    //     calleeLateRep.emitRestore(jit, calleeGPR);
-                    //     argumentsLateRep.emitRestore(jit, calleeGPR);
-                    //
-                    // Then we might end up with garbage if calleeLateRep.gpr() == argumentsGPR and
-                    // argumentsLateRep.gpr() == calleeGPR.
-                    //
-                    // We do a variety of things to prevent this from happening. For example, we use
-                    // argumentsLateRep before needing the other two and after we've already stopped
-                    // using the *GPRs. Also, we pin calleeGPR to regT0, and rely on the fact that
-                    // the *LateReps cannot use volatile registers (so they cannot be regT0, so
-                    // calleeGPR != argumentsLateRep.gpr() and calleeGPR != thisLateRep.gpr()).
-                    //
-                    // An alternative would have been to just use early uses and early-clobber all
-                    // volatile registers. But that would force callee, arguments, and this into
-                    // callee-save registers even if we have to spill them. We don't want spilling to
-                    // use up three callee-saves.
-                    //
-                    // TL;DR: The way we use LateReps here is dangerous and barely works but achieves
-                    // some desirable performance properties, so don't mistake the cleverness for
-                    // elegance.
-                    calleeLateRep = params[argIndex++];
-                    argumentsLateRep = params[argIndex++];
-                    thisLateRep = params[argIndex++];
-                }
-
-                // Get some scratch registers.
-                RegisterSet usedRegisters;
-                usedRegisters.merge(RegisterSet::stackRegisters());
-                usedRegisters.merge(RegisterSet::reservedHardwareRegisters());
-                usedRegisters.merge(RegisterSet::calleeSaveRegisters());
-                usedRegisters.set(calleeGPR);
-                if (argumentsGPR != InvalidGPRReg)
-                    usedRegisters.set(argumentsGPR);
-                usedRegisters.set(thisGPR);
-                if (calleeLateRep.isReg())
-                    usedRegisters.set(calleeLateRep.reg());
-                if (argumentsLateRep.isReg())
-                    usedRegisters.set(argumentsLateRep.reg());
-                if (thisLateRep.isReg())
-                    usedRegisters.set(thisLateRep.reg());
-                ScratchRegisterAllocator allocator(usedRegisters);
-                GPRReg scratchGPR1 = allocator.allocateScratchGPR();
-                GPRReg scratchGPR2 = allocator.allocateScratchGPR();
-                GPRReg scratchGPR3 = forwarding ? allocator.allocateScratchGPR() : InvalidGPRReg;
-                RELEASE_ASSERT(!allocator.numberOfReusedRegisters());
-
-                auto callWithExceptionCheck = [&amp;] (void* callee) {
-                    jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR);
-                    jit.call(GPRInfo::nonPreservedNonArgumentGPR);
-                    exceptions-&gt;append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
-                };
-
-                auto adjustStack = [&amp;] (GPRReg amount) {
-                    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), amount, CCallHelpers::stackPointerRegister);
-                };
-
-                unsigned originalStackHeight = params.proc().frameSize();
-
-                if (forwarding) {
-                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR2);
-                    
-                    CCallHelpers::JumpList slowCase;
-                    emitSetupVarargsFrameFastCase(jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node-&gt;child2()-&gt;origin.semantic.inlineCallFrame, data-&gt;firstVarArgOffset, slowCase);
-
-                    CCallHelpers::Jump done = jit.jump();
-                    slowCase.link(&amp;jit);
-                    jit.setupArgumentsExecState();
-                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationThrowStackOverflowForVarargs));
-                    jit.abortWithReason(DFGVarargsThrowingPathDidNotThrow);
-                    
-                    done.link(&amp;jit);
-
-                    adjustStack(scratchGPR2);
-                } else {
-                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR1);
-                    jit.setupArgumentsWithExecState(argumentsGPR, scratchGPR1, CCallHelpers::TrustedImm32(data-&gt;firstVarArgOffset));
-                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationSizeFrameForVarargs));
-
-                    jit.move(GPRInfo::returnValueGPR, scratchGPR1);
-                    jit.move(CCallHelpers::TrustedImm32(originalStackHeight / sizeof(EncodedJSValue)), scratchGPR2);
-                    argumentsLateRep.emitRestore(jit, argumentsGPR);
-                    emitSetVarargsFrame(jit, scratchGPR1, false, scratchGPR2, scratchGPR2);
-                    jit.addPtr(CCallHelpers::TrustedImm32(-minimumJSCallAreaSize), scratchGPR2, CCallHelpers::stackPointerRegister);
-                    jit.setupArgumentsWithExecState(scratchGPR2, argumentsGPR, CCallHelpers::TrustedImm32(data-&gt;firstVarArgOffset), scratchGPR1);
-                    callWithExceptionCheck(bitwise_cast&lt;void*&gt;(operationSetupVarargsFrame));
-                    
-                    adjustStack(GPRInfo::returnValueGPR);
-
-                    calleeLateRep.emitRestore(jit, GPRInfo::regT0);
-
-                    // This may not emit code if thisGPR got a callee-save. Also, we're guaranteed
-                    // that thisGPR != GPRInfo::regT0 because regT0 interferes with it.
-                    thisLateRep.emitRestore(jit, thisGPR);
-                }
-                
-                jit.store64(GPRInfo::regT0, CCallHelpers::calleeFrameSlot(JSStack::Callee));
-                jit.store64(thisGPR, CCallHelpers::calleeArgumentSlot(0));
-                
-                CallLinkInfo::CallType callType;
-                if (node-&gt;op() == ConstructVarargs || node-&gt;op() == ConstructForwardVarargs)
-                    callType = CallLinkInfo::ConstructVarargs;
-                else if (node-&gt;op() == TailCallVarargs || node-&gt;op() == TailCallForwardVarargs)
-                    callType = CallLinkInfo::TailCallVarargs;
-                else
-                    callType = CallLinkInfo::CallVarargs;
-                
-                bool isTailCall = CallLinkInfo::callModeFor(callType) == CallMode::Tail;
-                
-                CCallHelpers::DataLabelPtr targetToCheck;
-                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
-                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
-                    CCallHelpers::TrustedImmPtr(0));
-                
-                CCallHelpers::Call fastCall;
-                CCallHelpers::Jump done;
-                
-                if (isTailCall) {
-                    jit.emitRestoreCalleeSaves();
-                    jit.prepareForTailCallSlow();
-                    fastCall = jit.nearTailCall();
-                } else {
-                    fastCall = jit.nearCall();
-                    done = jit.jump();
-                }
-                
-                slowPath.link(&amp;jit);
-
-                if (isTailCall)
-                    jit.emitRestoreCalleeSaves();
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
-                CCallHelpers::Call slowCall = jit.nearCall();
-                
-                if (isTailCall)
-                    jit.abortWithReason(JITDidReturnFromTailCall);
-                else
-                    done.link(&amp;jit);
-                
-                callLinkInfo-&gt;setUpCall(callType, node-&gt;origin.semantic, GPRInfo::regT0);
-                
-                jit.addPtr(
-                    CCallHelpers::TrustedImm32(-originalStackHeight),
-                    GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
-                
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
-                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
-                        
-                        callLinkInfo-&gt;setCallLocations(
-                            linkBuffer.locationOfNearCall(slowCall),
-                            linkBuffer.locationOf(targetToCheck),
-                            linkBuffer.locationOfNearCall(fastCall));
-                    });
-            });
-
-        switch (node-&gt;op()) {
-        case TailCallVarargs:
-        case TailCallForwardVarargs:
-            m_out.unreachable();
-            break;
-
-        default:
-            setJSValue(patchpoint);
-            break;
-        }
-    }
-    
-    void compileLoadVarargs()
-    {
-        LoadVarargsData* data = m_node-&gt;loadVarargsData();
-        LValue jsArguments = lowJSValue(m_node-&gt;child1());
-        
-        LValue length = vmCall(
-            m_out.int32, m_out.operation(operationSizeOfVarargs), m_callFrame, jsArguments,
-            m_out.constInt32(data-&gt;offset));
-        
-        // FIXME: There is a chance that we will call an effectful length property twice. This is safe
-        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
-        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
-        // past the sizing.
-        // https://bugs.webkit.org/show_bug.cgi?id=141448
-        
-        LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
-        speculate(
-            VarargsOverflow, noValue(), nullptr,
-            m_out.above(lengthIncludingThis, m_out.constInt32(data-&gt;limit)));
-        
-        m_out.store32(lengthIncludingThis, payloadFor(data-&gt;machineCount));
-        
-        // FIXME: This computation is rather silly. If operationLoadVarargs just took a pointer instead
-        // of a VirtualRegister, we wouldn't have to do this.
-        // https://bugs.webkit.org/show_bug.cgi?id=141660
-        LValue machineStart = m_out.lShr(
-            m_out.sub(addressFor(data-&gt;machineStart.offset()).value(), m_callFrame),
-            m_out.constIntPtr(3));
-        
-        vmCall(
-            m_out.voidType, m_out.operation(operationLoadVarargs), m_callFrame,
-            m_out.castToInt32(machineStart), jsArguments, m_out.constInt32(data-&gt;offset),
-            length, m_out.constInt32(data-&gt;mandatoryMinimum));
-    }
-    
-    void compileForwardVarargs()
-    {
-        LoadVarargsData* data = m_node-&gt;loadVarargsData();
-        InlineCallFrame* inlineCallFrame = m_node-&gt;child1()-&gt;origin.semantic.inlineCallFrame;
-        
-        LValue length = getArgumentsLength(inlineCallFrame).value;
-        LValue lengthIncludingThis = m_out.add(length, m_out.constInt32(1 - data-&gt;offset));
-        
-        speculate(
-            VarargsOverflow, noValue(), nullptr,
-            m_out.above(lengthIncludingThis, m_out.constInt32(data-&gt;limit)));
-        
-        m_out.store32(lengthIncludingThis, payloadFor(data-&gt;machineCount));
-        
-        LValue sourceStart = getArgumentsStart(inlineCallFrame);
-        LValue targetStart = addressFor(data-&gt;machineStart).value();
-
-        LBasicBlock undefinedLoop = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs undefined loop body&quot;));
-        LBasicBlock mainLoopEntry = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs main loop entry&quot;));
-        LBasicBlock mainLoop = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs main loop body&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ForwardVarargs continuation&quot;));
-        
-        LValue lengthAsPtr = m_out.zeroExtPtr(length);
-        LValue loopBoundValue = m_out.constIntPtr(data-&gt;mandatoryMinimum);
-        ValueFromBlock loopBound = m_out.anchor(loopBoundValue);
-        m_out.branch(
-            m_out.above(loopBoundValue, lengthAsPtr), unsure(undefinedLoop), unsure(mainLoopEntry));
-        
-        LBasicBlock lastNext = m_out.appendTo(undefinedLoop, mainLoopEntry);
-        LValue previousIndex = m_out.phi(m_out.intPtr, loopBound);
-        LValue currentIndex = m_out.sub(previousIndex, m_out.intPtrOne);
-        m_out.store64(
-            m_out.constInt64(JSValue::encode(jsUndefined())),
-            m_out.baseIndex(m_heaps.variables, targetStart, currentIndex));
-        ValueFromBlock nextIndex = m_out.anchor(currentIndex);
-        m_out.addIncomingToPhi(previousIndex, nextIndex);
-        m_out.branch(
-            m_out.above(currentIndex, lengthAsPtr), unsure(undefinedLoop), unsure(mainLoopEntry));
-        
-        m_out.appendTo(mainLoopEntry, mainLoop);
-        loopBound = m_out.anchor(lengthAsPtr);
-        m_out.branch(m_out.notNull(lengthAsPtr), unsure(mainLoop), unsure(continuation));
-        
-        m_out.appendTo(mainLoop, continuation);
-        previousIndex = m_out.phi(m_out.intPtr, loopBound);
-        currentIndex = m_out.sub(previousIndex, m_out.intPtrOne);
-        LValue value = m_out.load64(
-            m_out.baseIndex(
-                m_heaps.variables, sourceStart,
-                m_out.add(currentIndex, m_out.constIntPtr(data-&gt;offset))));
-        m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, currentIndex));
-        nextIndex = m_out.anchor(currentIndex);
-        m_out.addIncomingToPhi(previousIndex, nextIndex);
-        m_out.branch(m_out.isNull(currentIndex), unsure(continuation), unsure(mainLoop));
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-
-    void compileJump()
-    {
-        m_out.jump(lowBlock(m_node-&gt;targetBlock()));
-    }
-    
-    void compileBranch()
-    {
-        m_out.branch(
-            boolify(m_node-&gt;child1()),
-            WeightedTarget(
-                lowBlock(m_node-&gt;branchData()-&gt;taken.block),
-                m_node-&gt;branchData()-&gt;taken.count),
-            WeightedTarget(
-                lowBlock(m_node-&gt;branchData()-&gt;notTaken.block),
-                m_node-&gt;branchData()-&gt;notTaken.count));
-    }
-    
-    void compileSwitch()
-    {
-        SwitchData* data = m_node-&gt;switchData();
-        switch (data-&gt;kind) {
-        case SwitchImm: {
-            Vector&lt;ValueFromBlock, 2&gt; intValues;
-            LBasicBlock switchOnInts = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm int case&quot;));
-            
-            LBasicBlock lastNext = m_out.appendTo(m_out.m_block, switchOnInts);
-            
-            switch (m_node-&gt;child1().useKind()) {
-            case Int32Use: {
-                intValues.append(m_out.anchor(lowInt32(m_node-&gt;child1())));
-                m_out.jump(switchOnInts);
-                break;
-            }
-                
-            case UntypedUse: {
-                LBasicBlock isInt = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is int&quot;));
-                LBasicBlock isNotInt = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is not int&quot;));
-                LBasicBlock isDouble = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchImm is double&quot;));
-                
-                LValue boxedValue = lowJSValue(m_node-&gt;child1());
-                m_out.branch(isNotInt32(boxedValue), unsure(isNotInt), unsure(isInt));
-                
-                LBasicBlock innerLastNext = m_out.appendTo(isInt, isNotInt);
-                
-                intValues.append(m_out.anchor(unboxInt32(boxedValue)));
-                m_out.jump(switchOnInts);
-                
-                m_out.appendTo(isNotInt, isDouble);
-                m_out.branch(
-                    isCellOrMisc(boxedValue, provenType(m_node-&gt;child1())),
-                    usually(lowBlock(data-&gt;fallThrough.block)), rarely(isDouble));
-                
-                m_out.appendTo(isDouble, innerLastNext);
-                LValue doubleValue = unboxDouble(boxedValue);
-                LValue intInDouble = m_out.doubleToInt(doubleValue);
-                intValues.append(m_out.anchor(intInDouble));
-                m_out.branch(
-                    m_out.doubleEqual(m_out.intToDouble(intInDouble), doubleValue),
-                    unsure(switchOnInts), unsure(lowBlock(data-&gt;fallThrough.block)));
-                break;
-            }
-                
-            default:
-                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                break;
-            }
-            
-            m_out.appendTo(switchOnInts, lastNext);
-            buildSwitch(data, m_out.int32, m_out.phi(m_out.int32, intValues));
-            return;
-        }
-        
-        case SwitchChar: {
-            LValue stringValue;
-            
-            // FIXME: We should use something other than unsure() for the branch weight
-            // of the fallThrough block. The main challenge is just that we have multiple
-            // branches to fallThrough but a single count, so we would need to divvy it up
-            // among the different lowered branches.
-            // https://bugs.webkit.org/show_bug.cgi?id=129082
-            
-            switch (m_node-&gt;child1().useKind()) {
-            case StringUse: {
-                stringValue = lowString(m_node-&gt;child1());
-                break;
-            }
-                
-            case UntypedUse: {
-                LValue unboxedValue = lowJSValue(m_node-&gt;child1());
-                
-                LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar is cell&quot;));
-                LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar is string&quot;));
-                
-                m_out.branch(
-                    isNotCell(unboxedValue, provenType(m_node-&gt;child1())),
-                    unsure(lowBlock(data-&gt;fallThrough.block)), unsure(isCellCase));
-                
-                LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
-                LValue cellValue = unboxedValue;
-                m_out.branch(
-                    isNotString(cellValue, provenType(m_node-&gt;child1())),
-                    unsure(lowBlock(data-&gt;fallThrough.block)), unsure(isStringCase));
-                
-                m_out.appendTo(isStringCase, lastNext);
-                stringValue = cellValue;
-                break;
-            }
-                
-            default:
-                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                break;
-            }
-            
-            LBasicBlock lengthIs1 = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar length is 1&quot;));
-            LBasicBlock needResolution = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar resolution&quot;));
-            LBasicBlock resolved = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar resolved&quot;));
-            LBasicBlock is8Bit = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar 8bit&quot;));
-            LBasicBlock is16Bit = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar 16bit&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchChar continuation&quot;));
-            
-            m_out.branch(
-                m_out.notEqual(
-                    m_out.load32NonNegative(stringValue, m_heaps.JSString_length),
-                    m_out.int32One),
-                unsure(lowBlock(data-&gt;fallThrough.block)), unsure(lengthIs1));
-            
-            LBasicBlock lastNext = m_out.appendTo(lengthIs1, needResolution);
-            Vector&lt;ValueFromBlock, 2&gt; values;
-            LValue fastValue = m_out.loadPtr(stringValue, m_heaps.JSString_value);
-            values.append(m_out.anchor(fastValue));
-            m_out.branch(m_out.isNull(fastValue), rarely(needResolution), usually(resolved));
-            
-            m_out.appendTo(needResolution, resolved);
-            values.append(m_out.anchor(
-                vmCall(m_out.intPtr, m_out.operation(operationResolveRope), m_callFrame, stringValue)));
-            m_out.jump(resolved);
-            
-            m_out.appendTo(resolved, is8Bit);
-            LValue value = m_out.phi(m_out.intPtr, values);
-            LValue characterData = m_out.loadPtr(value, m_heaps.StringImpl_data);
-            m_out.branch(
-                m_out.testNonZero32(
-                    m_out.load32(value, m_heaps.StringImpl_hashAndFlags),
-                    m_out.constInt32(StringImpl::flagIs8Bit())),
-                unsure(is8Bit), unsure(is16Bit));
-            
-            Vector&lt;ValueFromBlock, 2&gt; characters;
-            m_out.appendTo(is8Bit, is16Bit);
-            characters.append(m_out.anchor(m_out.load8ZeroExt32(characterData, m_heaps.characters8[0])));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(is16Bit, continuation);
-            characters.append(m_out.anchor(m_out.load16ZeroExt32(characterData, m_heaps.characters16[0])));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            buildSwitch(data, m_out.int32, m_out.phi(m_out.int32, characters));
-            return;
-        }
-        
-        case SwitchString: {
-            switch (m_node-&gt;child1().useKind()) {
-            case StringIdentUse: {
-                LValue stringImpl = lowStringIdent(m_node-&gt;child1());
-                
-                Vector&lt;SwitchCase&gt; cases;
-                for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
-                    LValue value = m_out.constIntPtr(data-&gt;cases[i].value.stringImpl());
-                    LBasicBlock block = lowBlock(data-&gt;cases[i].target.block);
-                    Weight weight = Weight(data-&gt;cases[i].target.count);
-                    cases.append(SwitchCase(value, block, weight));
-                }
-                
-                m_out.switchInstruction(
-                    stringImpl, cases, lowBlock(data-&gt;fallThrough.block),
-                    Weight(data-&gt;fallThrough.count));
-                return;
-            }
-                
-            case StringUse: {
-                switchString(data, lowString(m_node-&gt;child1()));
-                return;
-            }
-                
-            case UntypedUse: {
-                LValue value = lowJSValue(m_node-&gt;child1());
-                
-                LBasicBlock isCellBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString Untyped cell case&quot;));
-                LBasicBlock isStringBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString Untyped string case&quot;));
-                
-                m_out.branch(
-                    isCell(value, provenType(m_node-&gt;child1())),
-                    unsure(isCellBlock), unsure(lowBlock(data-&gt;fallThrough.block)));
-                
-                LBasicBlock lastNext = m_out.appendTo(isCellBlock, isStringBlock);
-                
-                m_out.branch(
-                    isString(value, provenType(m_node-&gt;child1())),
-                    unsure(isStringBlock), unsure(lowBlock(data-&gt;fallThrough.block)));
-                
-                m_out.appendTo(isStringBlock, lastNext);
-                
-                switchString(data, value);
-                return;
-            }
-                
-            default:
-                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                return;
-            }
-            return;
-        }
-            
-        case SwitchCell: {
-            LValue cell;
-            switch (m_node-&gt;child1().useKind()) {
-            case CellUse: {
-                cell = lowCell(m_node-&gt;child1());
-                break;
-            }
-                
-            case UntypedUse: {
-                LValue value = lowJSValue(m_node-&gt;child1());
-                LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchCell cell case&quot;));
-                m_out.branch(
-                    isCell(value, provenType(m_node-&gt;child1())),
-                    unsure(cellCase), unsure(lowBlock(data-&gt;fallThrough.block)));
-                m_out.appendTo(cellCase);
-                cell = value;
-                break;
-            }
-                
-            default:
-                DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-                return;
-            }
-            
-            buildSwitch(m_node-&gt;switchData(), m_out.intPtr, cell);
-            return;
-        } }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Bad switch kind&quot;);
-    }
-    
-    void compileReturn()
-    {
-        m_out.ret(lowJSValue(m_node-&gt;child1()));
-    }
-    
-    void compileForceOSRExit()
-    {
-        terminate(InadequateCoverage);
-    }
-    
-    void compileThrow()
-    {
-        terminate(Uncountable);
-    }
-    
-    void compileInvalidationPoint()
-    {
-        if (verboseCompilationEnabled())
-            dataLog(&quot;    Invalidation point with availability: &quot;, availabilityMap(), &quot;\n&quot;);
-
-        DFG_ASSERT(m_graph, m_node, m_origin.exitOK);
-        
-        PatchpointValue* patchpoint = m_out.patchpoint(Void);
-        OSRExitDescriptor* descriptor = appendOSRExitDescriptor(noValue(), nullptr);
-        NodeOrigin origin = m_origin;
-        patchpoint-&gt;appendColdAnys(buildExitArguments(descriptor, origin.forExit, noValue()));
-        
-        State* state = &amp;m_ftlState;
-
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const B3::StackmapGenerationParams&amp; params) {
-                // The MacroAssembler knows more about this than B3 does. The watchpointLabel() method
-                // will ensure that this is followed by a nop shadow but only when this is actually
-                // necessary.
-                CCallHelpers::Label label = jit.watchpointLabel();
-
-                RefPtr&lt;OSRExitHandle&gt; handle = descriptor-&gt;emitOSRExitLater(
-                    *state, UncountableInvalidation, origin, params);
-
-                RefPtr&lt;JITCode&gt; jitCode = state-&gt;jitCode.get();
-
-                jit.addLinkTask(
-                    [=] (LinkBuffer&amp; linkBuffer) {
-                        JumpReplacement jumpReplacement(
-                            linkBuffer.locationOf(label),
-                            linkBuffer.locationOf(handle-&gt;label));
-                        jitCode-&gt;common.jumpReplacements.append(jumpReplacement);
-                    });
-            });
-
-        // Set some obvious things.
-        patchpoint-&gt;effects.terminal = false;
-        patchpoint-&gt;effects.writesLocalState = false;
-        patchpoint-&gt;effects.readsLocalState = false;
-        
-        // This is how we tell B3 about the possibility of jump replacement.
-        patchpoint-&gt;effects.exitsSideways = true;
-        
-        // It's not possible for some prior branch to determine the safety of this operation. It's always
-        // fine to execute this on some path that wouldn't have originally executed it before
-        // optimization.
-        patchpoint-&gt;effects.controlDependent = false;
-
-        // If this falls through then it won't write anything.
-        patchpoint-&gt;effects.writes = HeapRange();
-
-        // When this abruptly terminates, it could read any heap location.
-        patchpoint-&gt;effects.reads = HeapRange::top();
-    }
-    
-    void compileIsUndefined()
-    {
-        setBoolean(equalNullOrUndefined(m_node-&gt;child1(), AllCellsAreFalse, EqualUndefined));
-    }
-    
-    void compileIsBoolean()
-    {
-        setBoolean(isBoolean(lowJSValue(m_node-&gt;child1()), provenType(m_node-&gt;child1())));
-    }
-    
-    void compileIsNumber()
-    {
-        setBoolean(isNumber(lowJSValue(m_node-&gt;child1()), provenType(m_node-&gt;child1())));
-    }
-    
-    void compileIsString()
-    {
-        LValue value = lowJSValue(m_node-&gt;child1());
-        
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsString cell case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsString continuation&quot;));
-        
-        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, continuation);
-        ValueFromBlock cellResult = m_out.anchor(isString(value, provenType(m_node-&gt;child1())));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, notCellResult, cellResult));
-    }
-
-    void compileIsObject()
-    {
-        LValue value = lowJSValue(m_node-&gt;child1());
-
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObject cell case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsObject continuation&quot;));
-
-        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isCell(value, provenType(m_node-&gt;child1())), unsure(isCellCase), unsure(continuation));
-
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, continuation);
-        ValueFromBlock cellResult = m_out.anchor(isObject(value, provenType(m_node-&gt;child1())));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, notCellResult, cellResult));
-    }
-
-    void compileIsObjectOrNull()
-    {
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        
-        Edge child = m_node-&gt;child1();
-        LValue value = lowJSValue(child);
-        
-        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull cell case&quot;));
-        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull not function case&quot;));
-        LBasicBlock objectCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull object case&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull slow path&quot;));
-        LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull not cell case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsObjectOrNull continuation&quot;));
-        
-        m_out.branch(isCell(value, provenType(child)), unsure(cellCase), unsure(notCellCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(cellCase, notFunctionCase);
-        ValueFromBlock isFunctionResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isFunction(value, provenType(child)),
-            unsure(continuation), unsure(notFunctionCase));
-        
-        m_out.appendTo(notFunctionCase, objectCase);
-        ValueFromBlock notObjectResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isObject(value, provenType(child)),
-            unsure(objectCase), unsure(continuation));
-        
-        m_out.appendTo(objectCase, slowPath);
-        ValueFromBlock objectResult = m_out.anchor(m_out.booleanTrue);
-        m_out.branch(
-            isExoticForTypeof(value, provenType(child)),
-            rarely(slowPath), usually(continuation));
-        
-        m_out.appendTo(slowPath, notCellCase);
-        LValue slowResultValue = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationObjectIsObject, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
-            }, value);
-        ValueFromBlock slowResult = m_out.anchor(m_out.notZero64(slowResultValue));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(notCellCase, continuation);
-        LValue notCellResultValue = m_out.equal(value, m_out.constInt64(JSValue::encode(jsNull())));
-        ValueFromBlock notCellResult = m_out.anchor(notCellResultValue);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        LValue result = m_out.phi(
-            m_out.boolean,
-            isFunctionResult, notObjectResult, objectResult, slowResult, notCellResult);
-        setBoolean(result);
-    }
-    
-    void compileIsFunction()
-    {
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        
-        Edge child = m_node-&gt;child1();
-        LValue value = lowJSValue(child);
-        
-        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;IsFunction cell case&quot;));
-        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;IsFunction not function case&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;IsFunction slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;IsFunction continuation&quot;));
-        
-        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isCell(value, provenType(child)), unsure(cellCase), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(cellCase, notFunctionCase);
-        ValueFromBlock functionResult = m_out.anchor(m_out.booleanTrue);
-        m_out.branch(
-            isFunction(value, provenType(child)),
-            unsure(continuation), unsure(notFunctionCase));
-        
-        m_out.appendTo(notFunctionCase, slowPath);
-        ValueFromBlock objectResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(
-            isExoticForTypeof(value, provenType(child)),
-            rarely(slowPath), usually(continuation));
-        
-        m_out.appendTo(slowPath, continuation);
-        LValue slowResultValue = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationObjectIsFunction, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
-            }, value);
-        ValueFromBlock slowResult = m_out.anchor(m_out.notNull(slowResultValue));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        LValue result = m_out.phi(
-            m_out.boolean, notCellResult, functionResult, objectResult, slowResult);
-        setBoolean(result);
-    }
-    
-    void compileTypeOf()
-    {
-        Edge child = m_node-&gt;child1();
-        LValue value = lowJSValue(child);
-        
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;TypeOf continuation&quot;));
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
-        
-        Vector&lt;ValueFromBlock&gt; results;
-        
-        buildTypeOf(
-            child, value,
-            [&amp;] (TypeofType type) {
-                results.append(m_out.anchor(weakPointer(vm().smallStrings.typeString(type))));
-                m_out.jump(continuation);
-            });
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, results));
-    }
-    
-    void compileIn()
-    {
-        Node* node = m_node;
-        Edge base = node-&gt;child2();
-        LValue cell = lowCell(base);
-        if (JSString* string = node-&gt;child1()-&gt;dynamicCastConstant&lt;JSString*&gt;()) {
-            if (string-&gt;tryGetValueImpl() &amp;&amp; string-&gt;tryGetValueImpl()-&gt;isAtomic()) {
-                UniquedStringImpl* str = bitwise_cast&lt;UniquedStringImpl*&gt;(string-&gt;tryGetValueImpl());
-                B3::PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-                patchpoint-&gt;appendSomeRegister(cell);
-                patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-
-                State* state = &amp;m_ftlState;
-                patchpoint-&gt;setGenerator(
-                    [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                        GPRReg baseGPR = params[1].gpr();
-                        GPRReg resultGPR = params[0].gpr();
-
-                        StructureStubInfo* stubInfo =
-                            jit.codeBlock()-&gt;addStubInfo(AccessType::In);
-                        stubInfo-&gt;callSiteIndex =
-                            state-&gt;jitCode-&gt;common.addCodeOrigin(node-&gt;origin.semantic);
-                        stubInfo-&gt;codeOrigin = node-&gt;origin.semantic;
-                        stubInfo-&gt;patch.baseGPR = static_cast&lt;int8_t&gt;(baseGPR);
-                        stubInfo-&gt;patch.valueGPR = static_cast&lt;int8_t&gt;(resultGPR);
-                        stubInfo-&gt;patch.usedRegisters = params.unavailableRegisters();
-
-                        CCallHelpers::PatchableJump jump = jit.patchableJump();
-                        CCallHelpers::Label done = jit.label();
-
-                        params.addLatePath(
-                            [=] (CCallHelpers&amp; jit) {
-                                AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                                jump.m_jump.link(&amp;jit);
-                                CCallHelpers::Label slowPathBegin = jit.label();
-                                CCallHelpers::Call slowPathCall = callOperation(
-                                    *state, params.unavailableRegisters(), jit,
-                                    node-&gt;origin.semantic, nullptr, operationInOptimize,
-                                    resultGPR, CCallHelpers::TrustedImmPtr(stubInfo), baseGPR,
-                                    CCallHelpers::TrustedImmPtr(str)).call();
-                                jit.jump().linkTo(done, &amp;jit);
-
-                                jit.addLinkTask(
-                                    [=] (LinkBuffer&amp; linkBuffer) {
-                                        CodeLocationCall callReturnLocation =
-                                            linkBuffer.locationOf(slowPathCall);
-                                        stubInfo-&gt;patch.deltaCallToDone =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(done));
-                                        stubInfo-&gt;patch.deltaCallToJump =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(jump));
-                                        stubInfo-&gt;callReturnLocation = callReturnLocation;
-                                        stubInfo-&gt;patch.deltaCallToSlowCase =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(slowPathBegin));
-                                    });
-                            });
-                    });
-
-                setJSValue(patchpoint);
-                return;
-            }
-        } 
-
-        setJSValue(vmCall(m_out.int64, m_out.operation(operationGenericIn), m_callFrame, cell, lowJSValue(m_node-&gt;child1())));
-    }
-
-    void compileOverridesHasInstance()
-    {
-        JSFunction* defaultHasInstanceFunction = jsCast&lt;JSFunction*&gt;(m_node-&gt;cellOperand()-&gt;value());
-
-        LValue constructor = lowCell(m_node-&gt;child1());
-        LValue hasInstance = lowJSValue(m_node-&gt;child2());
-
-        LBasicBlock defaultHasInstance = FTL_NEW_BLOCK(m_out, (&quot;OverridesHasInstance Symbol.hasInstance is default&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;OverridesHasInstance continuation&quot;));
-
-        // Unlike in the DFG, we don't worry about cleaning this code up for the case where we have proven the hasInstanceValue is a constant as LLVM should fix it for us.
-
-        ASSERT(!m_node-&gt;child2().node()-&gt;isCellConstant() || defaultHasInstanceFunction == m_node-&gt;child2().node()-&gt;asCell());
-
-        ValueFromBlock notDefaultHasInstanceResult = m_out.anchor(m_out.booleanTrue);
-        m_out.branch(m_out.notEqual(hasInstance, m_out.constIntPtr(defaultHasInstanceFunction)), unsure(continuation), unsure(defaultHasInstance));
-
-        LBasicBlock lastNext = m_out.appendTo(defaultHasInstance, continuation);
-        ValueFromBlock implementsDefaultHasInstanceResult = m_out.anchor(m_out.testIsZero32(
-            m_out.load8ZeroExt32(constructor, m_heaps.JSCell_typeInfoFlags),
-            m_out.constInt32(ImplementsDefaultHasInstance)));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, implementsDefaultHasInstanceResult, notDefaultHasInstanceResult));
-    }
-
-    void compileCheckTypeInfoFlags()
-    {
-        speculate(
-            BadTypeInfoFlags, noValue(), 0,
-            m_out.testIsZero32(
-                m_out.load8ZeroExt32(lowCell(m_node-&gt;child1()), m_heaps.JSCell_typeInfoFlags),
-                m_out.constInt32(m_node-&gt;typeInfoOperand())));
-    }
-    
-    void compileInstanceOf()
-    {
-        LValue cell;
-        
-        if (m_node-&gt;child1().useKind() == UntypedUse)
-            cell = lowJSValue(m_node-&gt;child1());
-        else
-            cell = lowCell(m_node-&gt;child1());
-        
-        LValue prototype = lowCell(m_node-&gt;child2());
-        
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf cell case&quot;));
-        LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf loop&quot;));
-        LBasicBlock notYetInstance = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf not yet instance&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;InstanceOf continuation&quot;));
-        
-        LValue condition;
-        if (m_node-&gt;child1().useKind() == UntypedUse)
-            condition = isCell(cell, provenType(m_node-&gt;child1()));
-        else
-            condition = m_out.booleanTrue;
-        
-        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-        m_out.branch(condition, unsure(isCellCase), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, loop);
-        
-        speculate(BadType, noValue(), 0, isNotObject(prototype, provenType(m_node-&gt;child2())));
-        
-        ValueFromBlock originalValue = m_out.anchor(cell);
-        m_out.jump(loop);
-        
-        m_out.appendTo(loop, notYetInstance);
-        LValue value = m_out.phi(m_out.int64, originalValue);
-        LValue structure = loadStructure(value);
-        LValue currentPrototype = m_out.load64(structure, m_heaps.Structure_prototype);
-        ValueFromBlock isInstanceResult = m_out.anchor(m_out.booleanTrue);
-        m_out.branch(
-            m_out.equal(currentPrototype, prototype),
-            unsure(continuation), unsure(notYetInstance));
-        
-        m_out.appendTo(notYetInstance, continuation);
-        ValueFromBlock notInstanceResult = m_out.anchor(m_out.booleanFalse);
-        m_out.addIncomingToPhi(value, m_out.anchor(currentPrototype));
-        m_out.branch(isCell(currentPrototype), unsure(loop), unsure(continuation));
-        
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(
-            m_out.phi(m_out.boolean, notCellResult, isInstanceResult, notInstanceResult));
-    }
-
-    void compileInstanceOfCustom()
-    {
-        LValue value = lowJSValue(m_node-&gt;child1());
-        LValue constructor = lowCell(m_node-&gt;child2());
-        LValue hasInstance = lowJSValue(m_node-&gt;child3());
-
-        setBoolean(m_out.logicalNot(m_out.equal(m_out.constInt32(0), vmCall(m_out.int32, m_out.operation(operationInstanceOfCustom), m_callFrame, value, constructor, hasInstance))));
-    }
-    
-    void compileCountExecution()
-    {
-        TypedPointer counter = m_out.absolute(m_node-&gt;executionCounter()-&gt;address());
-        m_out.store64(m_out.add(m_out.load64(counter), m_out.constInt64(1)), counter);
-    }
-    
-    void compileStoreBarrier()
-    {
-        emitStoreBarrier(lowCell(m_node-&gt;child1()));
-    }
-
-    void compileHasIndexedProperty()
-    {
-        switch (m_node-&gt;arrayMode().type()) {
-        case Array::Int32:
-        case Array::Contiguous: {
-            LValue base = lowCell(m_node-&gt;child1());
-            LValue index = lowInt32(m_node-&gt;child2());
-            LValue storage = lowStorage(m_node-&gt;child3());
-
-            IndexedAbstractHeap&amp; heap = m_node-&gt;arrayMode().type() == Array::Int32 ?
-                m_heaps.indexedInt32Properties : m_heaps.indexedContiguousProperties;
-
-            LBasicBlock checkHole = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous check hole&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty int/contiguous continuation&quot;));
-
-            if (!m_node-&gt;arrayMode().isInBounds()) {
-                m_out.branch(
-                    m_out.aboveOrEqual(
-                        index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
-                    rarely(slowCase), usually(checkHole));
-            } else
-                m_out.jump(checkHole);
-
-            LBasicBlock lastNext = m_out.appendTo(checkHole, slowCase);
-            LValue checkHoleResultValue =
-                m_out.notZero64(m_out.load64(baseIndex(heap, storage, index, m_node-&gt;child2())));
-            ValueFromBlock checkHoleResult = m_out.anchor(checkHoleResultValue);
-            m_out.branch(checkHoleResultValue, usually(continuation), rarely(slowCase));
-
-            m_out.appendTo(slowCase, continuation);
-            ValueFromBlock slowResult = m_out.anchor(m_out.equal(
-                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
-                vmCall(m_out.int64, m_out.operation(operationHasIndexedProperty), m_callFrame, base, index)));
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            setBoolean(m_out.phi(m_out.boolean, checkHoleResult, slowResult));
-            return;
-        }
-        case Array::Double: {
-            LValue base = lowCell(m_node-&gt;child1());
-            LValue index = lowInt32(m_node-&gt;child2());
-            LValue storage = lowStorage(m_node-&gt;child3());
-            
-            IndexedAbstractHeap&amp; heap = m_heaps.indexedDoubleProperties;
-            
-            LBasicBlock checkHole = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double check hole&quot;));
-            LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double slow case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasIndexedProperty double continuation&quot;));
-            
-            if (!m_node-&gt;arrayMode().isInBounds()) {
-                m_out.branch(
-                    m_out.aboveOrEqual(
-                        index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength)),
-                    rarely(slowCase), usually(checkHole));
-            } else
-                m_out.jump(checkHole);
-
-            LBasicBlock lastNext = m_out.appendTo(checkHole, slowCase);
-            LValue doubleValue = m_out.loadDouble(baseIndex(heap, storage, index, m_node-&gt;child2()));
-            LValue checkHoleResultValue = m_out.doubleEqual(doubleValue, doubleValue);
-            ValueFromBlock checkHoleResult = m_out.anchor(checkHoleResultValue);
-            m_out.branch(checkHoleResultValue, usually(continuation), rarely(slowCase));
-            
-            m_out.appendTo(slowCase, continuation);
-            ValueFromBlock slowResult = m_out.anchor(m_out.equal(
-                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
-                vmCall(m_out.int64, m_out.operation(operationHasIndexedProperty), m_callFrame, base, index)));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            setBoolean(m_out.phi(m_out.boolean, checkHoleResult, slowResult));
-            return;
-        }
-            
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-            return;
-        }
-    }
-
-    void compileHasGenericProperty()
-    {
-        LValue base = lowJSValue(m_node-&gt;child1());
-        LValue property = lowCell(m_node-&gt;child2());
-        setJSValue(vmCall(m_out.int64, m_out.operation(operationHasGenericProperty), m_callFrame, base, property));
-    }
-
-    void compileHasStructureProperty()
-    {
-        LValue base = lowJSValue(m_node-&gt;child1());
-        LValue property = lowString(m_node-&gt;child2());
-        LValue enumerator = lowCell(m_node-&gt;child3());
-
-        LBasicBlock correctStructure = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty correct structure&quot;));
-        LBasicBlock wrongStructure = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty wrong structure&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;HasStructureProperty continuation&quot;));
-
-        m_out.branch(m_out.notEqual(
-            m_out.load32(base, m_heaps.JSCell_structureID),
-            m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedStructureID)),
-            rarely(wrongStructure), usually(correctStructure));
-
-        LBasicBlock lastNext = m_out.appendTo(correctStructure, wrongStructure);
-        ValueFromBlock correctStructureResult = m_out.anchor(m_out.booleanTrue);
-        m_out.jump(continuation);
-
-        m_out.appendTo(wrongStructure, continuation);
-        ValueFromBlock wrongStructureResult = m_out.anchor(
-            m_out.equal(
-                m_out.constInt64(JSValue::encode(jsBoolean(true))), 
-                vmCall(m_out.int64, m_out.operation(operationHasGenericProperty), m_callFrame, base, property)));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, correctStructureResult, wrongStructureResult));
-    }
-
-    void compileGetDirectPname()
-    {
-        LValue base = lowCell(m_graph.varArgChild(m_node, 0));
-        LValue property = lowCell(m_graph.varArgChild(m_node, 1));
-        LValue index = lowInt32(m_graph.varArgChild(m_node, 2));
-        LValue enumerator = lowCell(m_graph.varArgChild(m_node, 3));
-
-        LBasicBlock checkOffset = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname check offset&quot;));
-        LBasicBlock inlineLoad = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname inline load&quot;));
-        LBasicBlock outOfLineLoad = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname out-of-line load&quot;));
-        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname slow case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetDirectPname continuation&quot;));
-
-        m_out.branch(m_out.notEqual(
-            m_out.load32(base, m_heaps.JSCell_structureID),
-            m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedStructureID)),
-            rarely(slowCase), usually(checkOffset));
-
-        LBasicBlock lastNext = m_out.appendTo(checkOffset, inlineLoad);
-        m_out.branch(m_out.aboveOrEqual(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity)),
-            unsure(outOfLineLoad), unsure(inlineLoad));
-
-        m_out.appendTo(inlineLoad, outOfLineLoad);
-        ValueFromBlock inlineResult = m_out.anchor(
-            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), 
-                base, m_out.zeroExt(index, m_out.int64), ScaleEight, JSObject::offsetOfInlineStorage())));
-        m_out.jump(continuation);
-
-        m_out.appendTo(outOfLineLoad, slowCase);
-        LValue storage = loadButterflyReadOnly(base);
-        LValue realIndex = m_out.signExt32To64(
-            m_out.neg(m_out.sub(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity))));
-        int32_t offsetOfFirstProperty = static_cast&lt;int32_t&gt;(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue);
-        ValueFromBlock outOfLineResult = m_out.anchor(
-            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), storage, realIndex, ScaleEight, offsetOfFirstProperty)));
-        m_out.jump(continuation);
-
-        m_out.appendTo(slowCase, continuation);
-        ValueFromBlock slowCaseResult = m_out.anchor(
-            vmCall(m_out.int64, m_out.operation(operationGetByVal), m_callFrame, base, property));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, inlineResult, outOfLineResult, slowCaseResult));
-    }
-
-    void compileGetEnumerableLength()
-    {
-        LValue enumerator = lowCell(m_node-&gt;child1());
-        setInt32(m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_indexLength));
-    }
-
-    void compileGetPropertyEnumerator()
-    {
-        LValue base = lowCell(m_node-&gt;child1());
-        setJSValue(vmCall(m_out.int64, m_out.operation(operationGetPropertyEnumerator), m_callFrame, base));
-    }
-
-    void compileGetEnumeratorStructurePname()
-    {
-        LValue enumerator = lowCell(m_node-&gt;child1());
-        LValue index = lowInt32(m_node-&gt;child2());
-
-        LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname in bounds&quot;));
-        LBasicBlock outOfBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname out of bounds&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorStructurePname continuation&quot;));
-
-        m_out.branch(m_out.below(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_endStructurePropertyIndex)),
-            usually(inBounds), rarely(outOfBounds));
-
-        LBasicBlock lastNext = m_out.appendTo(inBounds, outOfBounds);
-        LValue storage = m_out.loadPtr(enumerator, m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVector);
-        ValueFromBlock inBoundsResult = m_out.anchor(
-            m_out.loadPtr(m_out.baseIndex(m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, storage, m_out.zeroExtPtr(index))));
-        m_out.jump(continuation);
-
-        m_out.appendTo(outOfBounds, continuation);
-        ValueFromBlock outOfBoundsResult = m_out.anchor(m_out.constInt64(ValueNull));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, inBoundsResult, outOfBoundsResult));
-    }
-
-    void compileGetEnumeratorGenericPname()
-    {
-        LValue enumerator = lowCell(m_node-&gt;child1());
-        LValue index = lowInt32(m_node-&gt;child2());
-
-        LBasicBlock inBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname in bounds&quot;));
-        LBasicBlock outOfBounds = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname out of bounds&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;GetEnumeratorGenericPname continuation&quot;));
-
-        m_out.branch(m_out.below(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_endGenericPropertyIndex)),
-            usually(inBounds), rarely(outOfBounds));
-
-        LBasicBlock lastNext = m_out.appendTo(inBounds, outOfBounds);
-        LValue storage = m_out.loadPtr(enumerator, m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVector);
-        ValueFromBlock inBoundsResult = m_out.anchor(
-            m_out.loadPtr(m_out.baseIndex(m_heaps.JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, storage, m_out.zeroExtPtr(index))));
-        m_out.jump(continuation);
-
-        m_out.appendTo(outOfBounds, continuation);
-        ValueFromBlock outOfBoundsResult = m_out.anchor(m_out.constInt64(ValueNull));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setJSValue(m_out.phi(m_out.int64, inBoundsResult, outOfBoundsResult));
-    }
-    
-    void compileToIndexString()
-    {
-        LValue index = lowInt32(m_node-&gt;child1());
-        setJSValue(vmCall(m_out.int64, m_out.operation(operationToIndexString), m_callFrame, index));
-    }
-    
-    void compileCheckStructureImmediate()
-    {
-        LValue structure = lowCell(m_node-&gt;child1());
-        checkStructure(
-            structure, noValue(), BadCache, m_node-&gt;structureSet(),
-            [this] (Structure* structure) {
-                return weakStructure(structure);
-            });
-    }
-    
-    void compileMaterializeNewObject()
-    {
-        ObjectMaterializationData&amp; data = m_node-&gt;objectMaterializationData();
-        
-        // Lower the values first, to avoid creating values inside a control flow diamond.
-        
-        Vector&lt;LValue, 8&gt; values;
-        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i)
-            values.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
-        
-        const StructureSet&amp; set = m_node-&gt;structureSet();
-        
-        Vector&lt;LBasicBlock, 1&gt; blocks(set.size());
-        for (unsigned i = set.size(); i--;)
-            blocks[i] = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject case &quot;, i));
-        LBasicBlock dummyDefault = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject default case&quot;));
-        LBasicBlock outerContinuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject continuation&quot;));
-        
-        Vector&lt;SwitchCase, 1&gt; cases(set.size());
-        for (unsigned i = set.size(); i--;)
-            cases[i] = SwitchCase(weakStructure(set[i]), blocks[i], Weight(1));
-        m_out.switchInstruction(
-            lowCell(m_graph.varArgChild(m_node, 0)), cases, dummyDefault, Weight(0));
-        
-        LBasicBlock outerLastNext = m_out.m_nextBlock;
-        
-        Vector&lt;ValueFromBlock, 1&gt; results;
-        
-        for (unsigned i = set.size(); i--;) {
-            m_out.appendTo(blocks[i], i + 1 &lt; set.size() ? blocks[i + 1] : dummyDefault);
-            
-            Structure* structure = set[i];
-            
-            LValue object;
-            LValue butterfly;
-            
-            if (structure-&gt;outOfLineCapacity()) {
-                size_t allocationSize = JSFinalObject::allocationSize(structure-&gt;inlineCapacity());
-                MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
-                
-                LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject complex object allocation slow path&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeNewObject complex object allocation continuation&quot;));
-                
-                LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-                
-                LValue endOfStorage = allocateBasicStorageAndGetEnd(
-                    m_out.constIntPtr(structure-&gt;outOfLineCapacity() * sizeof(JSValue)),
-                    slowPath);
-                
-                LValue fastButterflyValue = m_out.add(
-                    m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage);
-                
-                LValue fastObjectValue = allocateObject(
-                    m_out.constIntPtr(allocator), structure, fastButterflyValue, slowPath);
-
-                ValueFromBlock fastObject = m_out.anchor(fastObjectValue);
-                ValueFromBlock fastButterfly = m_out.anchor(fastButterflyValue);
-                m_out.jump(continuation);
-                
-                m_out.appendTo(slowPath, continuation);
-
-                LValue slowObjectValue = lazySlowPath(
-                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                        return createLazyCallGenerator(
-                            operationNewObjectWithButterfly, locations[0].directGPR(),
-                            CCallHelpers::TrustedImmPtr(structure));
-                    });
-                ValueFromBlock slowObject = m_out.anchor(slowObjectValue);
-                ValueFromBlock slowButterfly = m_out.anchor(
-                    m_out.loadPtr(slowObjectValue, m_heaps.JSObject_butterfly));
-                
-                m_out.jump(continuation);
-                
-                m_out.appendTo(continuation, lastNext);
-                
-                object = m_out.phi(m_out.intPtr, fastObject, slowObject);
-                butterfly = m_out.phi(m_out.intPtr, fastButterfly, slowButterfly);
-            } else {
-                // In the easy case where we can do a one-shot allocation, we simply allocate the
-                // object to directly have the desired structure.
-                object = allocateObject(structure);
-                butterfly = nullptr; // Don't have one, don't need one.
-            }
-            
-            for (PropertyMapEntry entry : structure-&gt;getPropertiesConcurrently()) {
-                for (unsigned i = data.m_properties.size(); i--;) {
-                    PhantomPropertyValue value = data.m_properties[i];
-                    if (m_graph.identifiers()[value.m_identifierNumber] != entry.key)
-                        continue;
-                    
-                    LValue base = isInlineOffset(entry.offset) ? object : butterfly;
-                    storeProperty(values[i], base, value.m_identifierNumber, entry.offset);
-                    break;
-                }
-            }
-            
-            results.append(m_out.anchor(object));
-            m_out.jump(outerContinuation);
-        }
-        
-        m_out.appendTo(dummyDefault, outerContinuation);
-        m_out.unreachable();
-        
-        m_out.appendTo(outerContinuation, outerLastNext);
-        setJSValue(m_out.phi(m_out.intPtr, results));
-    }
-
-    void compileMaterializeCreateActivation()
-    {
-        ObjectMaterializationData&amp; data = m_node-&gt;objectMaterializationData();
-
-        Vector&lt;LValue, 8&gt; values;
-        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i)
-            values.append(lowJSValue(m_graph.varArgChild(m_node, 2 + i)));
-
-        LValue scope = lowCell(m_graph.varArgChild(m_node, 1));
-        SymbolTable* table = m_node-&gt;castOperand&lt;SymbolTable*&gt;();
-        ASSERT(table == m_graph.varArgChild(m_node, 0)-&gt;castConstant&lt;SymbolTable*&gt;());
-        Structure* structure = m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;activationStructure();
-
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;MaterializeCreateActivation slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;MaterializeCreateActivation continuation&quot;));
-
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-
-        LValue fastObject = allocateObject&lt;JSLexicalEnvironment&gt;(
-            JSLexicalEnvironment::allocationSize(table), structure, m_out.intPtrZero, slowPath);
-
-        m_out.storePtr(scope, fastObject, m_heaps.JSScope_next);
-        m_out.storePtr(weakPointer(table), fastObject, m_heaps.JSSymbolTableObject_symbolTable);
-
-
-        ValueFromBlock fastResult = m_out.anchor(fastObject);
-        m_out.jump(continuation);
-
-        m_out.appendTo(slowPath, continuation);
-        // We ensure allocation sinking explicitly sets bottom values for all field members. 
-        // Therefore, it doesn't matter what JSValue we pass in as the initialization value
-        // because all fields will be overwritten.
-        // FIXME: It may be worth creating an operation that calls a constructor on JSLexicalEnvironment that 
-        // doesn't initialize every slot because we are guaranteed to do that here.
-        LValue callResult = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationCreateActivationDirect, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
-                    CCallHelpers::TrustedImmPtr(table),
-                    CCallHelpers::TrustedImm64(JSValue::encode(jsUndefined())));
-            }, scope);
-        ValueFromBlock slowResult = m_out.anchor(callResult);
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        LValue activation = m_out.phi(m_out.intPtr, fastResult, slowResult);
-        RELEASE_ASSERT(data.m_properties.size() == table-&gt;scopeSize());
-        for (unsigned i = 0; i &lt; data.m_properties.size(); ++i) {
-            m_out.store64(values[i],
-                activation,
-                m_heaps.JSEnvironmentRecord_variables[data.m_properties[i].m_identifierNumber]);
-        }
-
-        if (validationEnabled()) {
-            // Validate to make sure every slot in the scope has one value.
-            ConcurrentJITLocker locker(table-&gt;m_lock);
-            for (auto iter = table-&gt;begin(locker), end = table-&gt;end(locker); iter != end; ++iter) {
-                bool found = false;
-                for (unsigned i = 0; i &lt; data.m_properties.size(); ++i) {
-                    if (iter-&gt;value.scopeOffset().offset() == data.m_properties[i].m_identifierNumber) {
-                        found = true;
-                        break;
-                    }
-                }
-                ASSERT_UNUSED(found, found);
-            }
-        }
-
-        setJSValue(activation);
-    }
-
-    void compileCheckWatchdogTimer()
-    {
-        LBasicBlock timerDidFire = FTL_NEW_BLOCK(m_out, (&quot;CheckWatchdogTimer timer did fire&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CheckWatchdogTimer continuation&quot;));
-        
-        LValue state = m_out.load8ZeroExt32(m_out.absolute(vm().watchdog()-&gt;timerDidFireAddress()));
-        m_out.branch(m_out.isZero32(state),
-            usually(continuation), rarely(timerDidFire));
-
-        LBasicBlock lastNext = m_out.appendTo(timerDidFire, continuation);
-
-        lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(operationHandleWatchdogTimer, InvalidGPRReg);
-            });
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-
-    LValue didOverflowStack()
-    {
-        // This does a very simple leaf function analysis. The invariant of FTL call
-        // frames is that the caller had already done enough of a stack check to
-        // prove that this call frame has enough stack to run, and also enough stack
-        // to make runtime calls. So, we only need to stack check when making calls
-        // to other JS functions. If we don't find such calls then we don't need to
-        // do any stack checks.
-        
-        for (BlockIndex blockIndex = 0; blockIndex &lt; m_graph.numBlocks(); ++blockIndex) {
-            DFG::BasicBlock* block = m_graph.block(blockIndex);
-            if (!block)
-                continue;
-            
-            for (unsigned nodeIndex = block-&gt;size(); nodeIndex--;) {
-                Node* node = block-&gt;at(nodeIndex);
-                
-                switch (node-&gt;op()) {
-                case GetById:
-                case PutById:
-                case Call:
-                case Construct:
-                    return m_out.below(
-                        m_callFrame,
-                        m_out.loadPtr(
-                            m_out.absolute(vm().addressOfFTLStackLimit())));
-                    
-                default:
-                    break;
-                }
-            }
-        }
-        
-        return m_out.booleanFalse;
-    }
-    
-    struct ArgumentsLength {
-        ArgumentsLength()
-            : isKnown(false)
-            , known(UINT_MAX)
-            , value(nullptr)
-        {
-        }
-        
-        bool isKnown;
-        unsigned known;
-        LValue value;
-    };
-    ArgumentsLength getArgumentsLength(InlineCallFrame* inlineCallFrame)
-    {
-        ArgumentsLength length;
-
-        if (inlineCallFrame &amp;&amp; !inlineCallFrame-&gt;isVarargs()) {
-            length.known = inlineCallFrame-&gt;arguments.size() - 1;
-            length.isKnown = true;
-            length.value = m_out.constInt32(length.known);
-        } else {
-            length.known = UINT_MAX;
-            length.isKnown = false;
-            
-            VirtualRegister argumentCountRegister;
-            if (!inlineCallFrame)
-                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
-            else
-                argumentCountRegister = inlineCallFrame-&gt;argumentCountRegister;
-            length.value = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
-        }
-        
-        return length;
-    }
-    
-    ArgumentsLength getArgumentsLength()
-    {
-        return getArgumentsLength(m_node-&gt;origin.semantic.inlineCallFrame);
-    }
-    
-    LValue getCurrentCallee()
-    {
-        if (InlineCallFrame* frame = m_node-&gt;origin.semantic.inlineCallFrame) {
-            if (frame-&gt;isClosureCall)
-                return m_out.loadPtr(addressFor(frame-&gt;calleeRecovery.virtualRegister()));
-            return weakPointer(frame-&gt;calleeRecovery.constant().asCell());
-        }
-        return m_out.loadPtr(addressFor(JSStack::Callee));
-    }
-    
-    LValue getArgumentsStart(InlineCallFrame* inlineCallFrame)
-    {
-        VirtualRegister start = AssemblyHelpers::argumentsStart(inlineCallFrame);
-        return addressFor(start).value();
-    }
-    
-    LValue getArgumentsStart()
-    {
-        return getArgumentsStart(m_node-&gt;origin.semantic.inlineCallFrame);
-    }
-    
-    template&lt;typename Functor&gt;
-    void checkStructure(
-        LValue structureDiscriminant, const FormattedValue&amp; formattedValue, ExitKind exitKind,
-        const StructureSet&amp; set, const Functor&amp; weakStructureDiscriminant)
-    {
-        if (set.isEmpty()) {
-            terminate(exitKind);
-            return;
-        }
-
-        if (set.size() == 1) {
-            speculate(
-                exitKind, formattedValue, 0,
-                m_out.notEqual(structureDiscriminant, weakStructureDiscriminant(set[0])));
-            return;
-        }
-        
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkStructure continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
-        for (unsigned i = 0; i &lt; set.size() - 1; ++i) {
-            LBasicBlock nextStructure = FTL_NEW_BLOCK(m_out, (&quot;checkStructure nextStructure&quot;));
-            m_out.branch(
-                m_out.equal(structureDiscriminant, weakStructureDiscriminant(set[i])),
-                unsure(continuation), unsure(nextStructure));
-            m_out.appendTo(nextStructure);
-        }
-        
-        speculate(
-            exitKind, formattedValue, 0,
-            m_out.notEqual(structureDiscriminant, weakStructureDiscriminant(set.last())));
-        
-        m_out.jump(continuation);
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    LValue numberOrNotCellToInt32(Edge edge, LValue value)
-    {
-        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 int case&quot;));
-        LBasicBlock notIntCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 not int case&quot;));
-        LBasicBlock doubleCase = 0;
-        LBasicBlock notNumberCase = 0;
-        if (edge.useKind() == NotCellUse) {
-            doubleCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 double case&quot;));
-            notNumberCase = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 not number case&quot;));
-        }
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ValueToInt32 continuation&quot;));
-        
-        Vector&lt;ValueFromBlock&gt; results;
-        
-        m_out.branch(isNotInt32(value), unsure(notIntCase), unsure(intCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(intCase, notIntCase);
-        results.append(m_out.anchor(unboxInt32(value)));
-        m_out.jump(continuation);
-        
-        if (edge.useKind() == NumberUse) {
-            m_out.appendTo(notIntCase, continuation);
-            FTL_TYPE_CHECK(jsValueValue(value), edge, SpecBytecodeNumber, isCellOrMisc(value));
-            results.append(m_out.anchor(doubleToInt32(unboxDouble(value))));
-            m_out.jump(continuation);
-        } else {
-            m_out.appendTo(notIntCase, doubleCase);
-            m_out.branch(
-                isCellOrMisc(value, provenType(edge)), unsure(notNumberCase), unsure(doubleCase));
-            
-            m_out.appendTo(doubleCase, notNumberCase);
-            results.append(m_out.anchor(doubleToInt32(unboxDouble(value))));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notNumberCase, continuation);
-            
-            FTL_TYPE_CHECK(jsValueValue(value), edge, ~SpecCell, isCell(value));
-            
-            LValue specialResult = m_out.select(
-                m_out.equal(value, m_out.constInt64(JSValue::encode(jsBoolean(true)))),
-                m_out.int32One, m_out.int32Zero);
-            results.append(m_out.anchor(specialResult));
-            m_out.jump(continuation);
-        }
-        
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.int32, results);
-    }
-
-    void checkInferredType(Edge edge, LValue value, const InferredType::Descriptor&amp; type)
-    {
-        // This cannot use FTL_TYPE_CHECK or typeCheck() because it is called partially, as in a node like:
-        //
-        //     MultiPutByOffset(...)
-        //
-        // may be lowered to:
-        //
-        //     switch (object-&gt;structure) {
-        //     case 42:
-        //         checkInferredType(..., type1);
-        //         ...
-        //         break;
-        //     case 43:
-        //         checkInferredType(..., type2);
-        //         ...
-        //         break;
-        //     }
-        //
-        // where type1 and type2 are different. Using typeCheck() would mean that the edge would be
-        // filtered by type1 &amp; type2, instead of type1 | type2.
-        
-        switch (type.kind()) {
-        case InferredType::Bottom:
-            speculate(BadType, jsValueValue(value), edge.node(), m_out.booleanTrue);
-            return;
-
-        case InferredType::Boolean:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotBoolean(value, provenType(edge)));
-            return;
-
-        case InferredType::Other:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotOther(value, provenType(edge)));
-            return;
-
-        case InferredType::Int32:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotInt32(value, provenType(edge)));
-            return;
-
-        case InferredType::Number:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotNumber(value, provenType(edge)));
-            return;
-
-        case InferredType::String:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
-            speculate(BadType, jsValueValue(value), edge.node(), isNotString(value, provenType(edge)));
-            return;
-
-        case InferredType::Symbol:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
-            speculate(BadType, jsValueValue(value), edge.node(), isNotSymbol(value, provenType(edge)));
-            return;
-
-        case InferredType::ObjectWithStructure:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
-            if (!abstractValue(edge).m_structure.isSubsetOf(StructureSet(type.structure()))) {
-                speculate(
-                    BadType, jsValueValue(value), edge.node(),
-                    m_out.notEqual(
-                        m_out.load32(value, m_heaps.JSCell_structureID),
-                        weakStructureID(type.structure())));
-            }
-            return;
-
-        case InferredType::ObjectWithStructureOrOther: {
-            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther cell case&quot;));
-            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther not cell case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectWithStructureOrOther continuation&quot;));
-
-            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
-
-            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
-
-            if (!abstractValue(edge).m_structure.isSubsetOf(StructureSet(type.structure()))) {
-                speculate(
-                    BadType, jsValueValue(value), edge.node(),
-                    m_out.notEqual(
-                        m_out.load32(value, m_heaps.JSCell_structureID),
-                        weakStructureID(type.structure())));
-            }
-
-            m_out.jump(continuation);
-
-            m_out.appendTo(notCellCase, continuation);
-
-            speculate(
-                BadType, jsValueValue(value), edge.node(),
-                isNotOther(value, provenType(edge) &amp; ~SpecCell));
-
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            return;
-        }
-
-        case InferredType::Object:
-            speculate(BadType, jsValueValue(value), edge.node(), isNotCell(value, provenType(edge)));
-            speculate(BadType, jsValueValue(value), edge.node(), isNotObject(value, provenType(edge)));
-            return;
-
-        case InferredType::ObjectOrOther: {
-            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther cell case&quot;));
-            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther not cell case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;checkInferredType ObjectOrOther continuation&quot;));
-
-            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
-
-            LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
-
-            speculate(
-                BadType, jsValueValue(value), edge.node(),
-                isNotObject(value, provenType(edge) &amp; SpecCell));
-
-            m_out.jump(continuation);
-
-            m_out.appendTo(notCellCase, continuation);
-
-            speculate(
-                BadType, jsValueValue(value), edge.node(),
-                isNotOther(value, provenType(edge) &amp; ~SpecCell));
-
-            m_out.jump(continuation);
-
-            m_out.appendTo(continuation, lastNext);
-            return;
-        }
-
-        case InferredType::Top:
-            return;
-        }
-
-        DFG_CRASH(m_graph, m_node, &quot;Bad inferred type&quot;);
-    }
-    
-    LValue loadProperty(LValue storage, unsigned identifierNumber, PropertyOffset offset)
-    {
-        return m_out.load64(addressOfProperty(storage, identifierNumber, offset));
-    }
-    
-    void storeProperty(
-        LValue value, LValue storage, unsigned identifierNumber, PropertyOffset offset)
-    {
-        m_out.store64(value, addressOfProperty(storage, identifierNumber, offset));
-    }
-    
-    TypedPointer addressOfProperty(
-        LValue storage, unsigned identifierNumber, PropertyOffset offset)
-    {
-        return m_out.address(
-            m_heaps.properties[identifierNumber], storage, offsetRelativeToBase(offset));
-    }
-    
-    LValue storageForTransition(
-        LValue object, PropertyOffset offset,
-        Structure* previousStructure, Structure* nextStructure)
-    {
-        if (isInlineOffset(offset))
-            return object;
-        
-        if (previousStructure-&gt;outOfLineCapacity() == nextStructure-&gt;outOfLineCapacity())
-            return loadButterflyWithBarrier(object);
-        
-        LValue result;
-        if (!previousStructure-&gt;outOfLineCapacity())
-            result = allocatePropertyStorage(object, previousStructure);
-        else {
-            result = reallocatePropertyStorage(
-                object, loadButterflyWithBarrier(object),
-                previousStructure, nextStructure);
-        }
-        
-        emitStoreBarrier(object);
-        
-        return result;
-    }
-    
-    LValue allocatePropertyStorage(LValue object, Structure* previousStructure)
-    {
-        if (previousStructure-&gt;couldHaveIndexingHeader()) {
-            return vmCall(
-                m_out.intPtr,
-                m_out.operation(
-                    operationReallocateButterflyToHavePropertyStorageWithInitialCapacity),
-                m_callFrame, object);
-        }
-        
-        LValue result = allocatePropertyStorageWithSizeImpl(initialOutOfLineCapacity);
-        m_out.storePtr(result, object, m_heaps.JSObject_butterfly);
-        return result;
-    }
-    
-    LValue reallocatePropertyStorage(
-        LValue object, LValue oldStorage, Structure* previous, Structure* next)
-    {
-        size_t oldSize = previous-&gt;outOfLineCapacity();
-        size_t newSize = oldSize * outOfLineGrowthFactor; 
-
-        ASSERT_UNUSED(next, newSize == next-&gt;outOfLineCapacity());
-        
-        if (previous-&gt;couldHaveIndexingHeader()) {
-            LValue newAllocSize = m_out.constIntPtr(newSize);                    
-            return vmCall(m_out.intPtr, m_out.operation(operationReallocateButterflyToGrowPropertyStorage), m_callFrame, object, newAllocSize);
-        }
-        
-        LValue result = allocatePropertyStorageWithSizeImpl(newSize);
-
-        ptrdiff_t headerSize = -sizeof(IndexingHeader) - sizeof(void*);
-        ptrdiff_t endStorage = headerSize - static_cast&lt;ptrdiff_t&gt;(oldSize * sizeof(JSValue));
-
-        for (ptrdiff_t offset = headerSize; offset &gt; endStorage; offset -= sizeof(void*)) {
-            LValue loaded = 
-                m_out.loadPtr(m_out.address(m_heaps.properties.atAnyNumber(), oldStorage, offset));
-            m_out.storePtr(loaded, m_out.address(m_heaps.properties.atAnyNumber(), result, offset));
-        }
-        
-        m_out.storePtr(result, m_out.address(object, m_heaps.JSObject_butterfly));
-        
-        return result;
-    }
-    
-    LValue allocatePropertyStorageWithSizeImpl(size_t sizeInValues)
-    {
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;allocatePropertyStorageWithSizeImpl slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocatePropertyStorageWithSizeImpl continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        LValue endOfStorage = allocateBasicStorageAndGetEnd(
-            m_out.constIntPtr(sizeInValues * sizeof(JSValue)), slowPath);
-        
-        ValueFromBlock fastButterfly = m_out.anchor(
-            m_out.add(m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage));
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        
-        LValue slowButterflyValue;
-        if (sizeInValues == initialOutOfLineCapacity) {
-            slowButterflyValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationAllocatePropertyStorageWithInitialCapacity,
-                        locations[0].directGPR());
-                });
-        } else {
-            slowButterflyValue = lazySlowPath(
-                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                    return createLazyCallGenerator(
-                        operationAllocatePropertyStorage, locations[0].directGPR(),
-                        CCallHelpers::TrustedImmPtr(sizeInValues));
-                });
-        }
-        ValueFromBlock slowButterfly = m_out.anchor(slowButterflyValue);
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        
-        return m_out.phi(m_out.intPtr, fastButterfly, slowButterfly);
-    }
-    
-    LValue getById(LValue base)
-    {
-        Node* node = m_node;
-        UniquedStringImpl* uid = m_graph.identifiers()[node-&gt;identifierNumber()];
-
-        B3::PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-        patchpoint-&gt;appendSomeRegister(base);
-
-        // FIXME: If this is a GetByIdFlush, we might get some performance boost if we claim that it
-        // clobbers volatile registers late. It's not necessary for correctness, though, since the
-        // IC code is super smart about saving registers.
-        // https://bugs.webkit.org/show_bug.cgi?id=152848
-        
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                CallSiteIndex callSiteIndex =
-                    state-&gt;jitCode-&gt;common.addUniqueCallSiteIndex(node-&gt;origin.semantic);
-
-                // This is the direct exit target for operation calls.
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-
-                // This is the exit for call IC's created by the getById for getters. We don't have
-                // to do anything weird other than call this, since it will associate the exit with
-                // the callsite index.
-                exceptionHandle-&gt;scheduleExitCreationForUnwind(params, callSiteIndex);
-
-                auto generator = Box&lt;JITGetByIdGenerator&gt;::create(
-                    jit.codeBlock(), node-&gt;origin.semantic, callSiteIndex,
-                    params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
-                    JSValueRegs(params[0].gpr()));
-
-                generator-&gt;generateFastPath(jit);
-                CCallHelpers::Label done = jit.label();
-
-                params.addLatePath(
-                    [=] (CCallHelpers&amp; jit) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                        generator-&gt;slowPathJump().link(&amp;jit);
-                        CCallHelpers::Label slowPathBegin = jit.label();
-                        CCallHelpers::Call slowPathCall = callOperation(
-                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                            exceptions.get(), operationGetByIdOptimize, params[0].gpr(),
-                            CCallHelpers::TrustedImmPtr(generator-&gt;stubInfo()), params[1].gpr(),
-                            CCallHelpers::TrustedImmPtr(uid)).call();
-                        jit.jump().linkTo(done, &amp;jit);
-
-                        generator-&gt;reportSlowPathCall(slowPathBegin, slowPathCall);
-
-                        jit.addLinkTask(
-                            [=] (LinkBuffer&amp; linkBuffer) {
-                                generator-&gt;finalize(linkBuffer);
-                            });
-                    });
-            });
-
-        return patchpoint;
-    }
-
-    LValue loadButterflyWithBarrier(LValue object)
-    {
-        return copyBarrier(
-            object, m_out.loadPtr(object, m_heaps.JSObject_butterfly), operationGetButterfly);
-    }
-    
-    LValue loadVectorWithBarrier(LValue object)
-    {
-        LValue fastResultValue = m_out.loadPtr(object, m_heaps.JSArrayBufferView_vector);
-        return copyBarrier(
-            fastResultValue,
-            [&amp;] () -&gt; LValue {
-                LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;loadVectorWithBarrier slow path&quot;));
-                LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;loadVectorWithBarrier continuation&quot;));
-
-                ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-                m_out.branch(isFastTypedArray(object), rarely(slowPath), usually(continuation));
-
-                LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
-
-                LValue slowResultValue = lazySlowPath(
-                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                        return createLazyCallGenerator(
-                            operationGetArrayBufferVector, locations[0].directGPR(),
-                            locations[1].directGPR());
-                    }, object);
-                ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-                m_out.jump(continuation);
-
-                m_out.appendTo(continuation, lastNext);
-                return m_out.phi(m_out.intPtr, fastResult, slowResult);
-            });
-    }
-
-    LValue copyBarrier(LValue object, LValue pointer, P_JITOperation_EC slowPathFunction)
-    {
-        return copyBarrier(
-            pointer,
-            [&amp;] () -&gt; LValue {
-                return lazySlowPath(
-                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                        return createLazyCallGenerator(
-                            slowPathFunction, locations[0].directGPR(), locations[1].directGPR());
-                    }, object);
-            });
-    }
-
-    template&lt;typename Functor&gt;
-    LValue copyBarrier(LValue pointer, const Functor&amp; functor)
-    {
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;copyBarrier slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;copyBarrier continuation&quot;));
-
-        ValueFromBlock fastResult = m_out.anchor(pointer);
-        m_out.branch(isInToSpace(pointer), usually(continuation), rarely(slowPath));
-
-        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
-
-        ValueFromBlock slowResult = m_out.anchor(functor());
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.intPtr, fastResult, slowResult);
-    }
-
-    LValue isInToSpace(LValue pointer)
-    {
-        return m_out.testIsZeroPtr(pointer, m_out.constIntPtr(CopyBarrierBase::spaceBits));
-    }
-
-    LValue loadButterflyReadOnly(LValue object)
-    {
-        return removeSpaceBits(m_out.loadPtr(object, m_heaps.JSObject_butterfly));
-    }
-
-    LValue loadVectorReadOnly(LValue object)
-    {
-        LValue fastResultValue = m_out.loadPtr(object, m_heaps.JSArrayBufferView_vector);
-
-        LBasicBlock possiblyFromSpace = FTL_NEW_BLOCK(m_out, (&quot;loadVectorReadOnly possibly from space&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;loadVectorReadOnly continuation&quot;));
-
-        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-
-        m_out.branch(isInToSpace(fastResultValue), usually(continuation), rarely(possiblyFromSpace));
-
-        LBasicBlock lastNext = m_out.appendTo(possiblyFromSpace, continuation);
-
-        LValue slowResultValue = m_out.select(
-            isFastTypedArray(object), removeSpaceBits(fastResultValue), fastResultValue);
-        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        
-        return m_out.phi(m_out.intPtr, fastResult, slowResult);
-    }
-
-    LValue removeSpaceBits(LValue storage)
-    {
-        return m_out.bitAnd(
-            storage, m_out.constIntPtr(~static_cast&lt;intptr_t&gt;(CopyBarrierBase::spaceBits)));
-    }
-
-    LValue isFastTypedArray(LValue object)
-    {
-        return m_out.equal(
-            m_out.load32(object, m_heaps.JSArrayBufferView_mode),
-            m_out.constInt32(FastTypedArray));
-    }
-    
-    TypedPointer baseIndex(IndexedAbstractHeap&amp; heap, LValue storage, LValue index, Edge edge, ptrdiff_t offset = 0)
-    {
-        return m_out.baseIndex(
-            heap, storage, m_out.zeroExtPtr(index), provenValue(edge), offset);
-    }
-
-    template&lt;typename IntFunctor, typename DoubleFunctor&gt;
-    void compare(
-        const IntFunctor&amp; intFunctor, const DoubleFunctor&amp; doubleFunctor,
-        S_JITOperation_EJJ helperFunction)
-    {
-        if (m_node-&gt;isBinaryUseKind(Int32Use)) {
-            LValue left = lowInt32(m_node-&gt;child1());
-            LValue right = lowInt32(m_node-&gt;child2());
-            setBoolean(intFunctor(left, right));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(Int52RepUse)) {
-            Int52Kind kind;
-            LValue left = lowWhicheverInt52(m_node-&gt;child1(), kind);
-            LValue right = lowInt52(m_node-&gt;child2(), kind);
-            setBoolean(intFunctor(left, right));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(DoubleRepUse)) {
-            LValue left = lowDouble(m_node-&gt;child1());
-            LValue right = lowDouble(m_node-&gt;child2());
-            setBoolean(doubleFunctor(left, right));
-            return;
-        }
-        
-        if (m_node-&gt;isBinaryUseKind(UntypedUse)) {
-            nonSpeculativeCompare(intFunctor, helperFunction);
-            return;
-        }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Bad use kinds&quot;);
-    }
-    
-    void compareEqObjectOrOtherToObject(Edge leftChild, Edge rightChild)
-    {
-        LValue rightCell = lowCell(rightChild);
-        LValue leftValue = lowJSValue(leftChild, ManualOperandSpeculation);
-        
-        speculateTruthyObject(rightChild, rightCell, SpecObject);
-        
-        LBasicBlock leftCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject left cell case&quot;));
-        LBasicBlock leftNotCellCase = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject left not cell case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareEqObjectOrOtherToObject continuation&quot;));
-        
-        m_out.branch(
-            isCell(leftValue, provenType(leftChild)),
-            unsure(leftCellCase), unsure(leftNotCellCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(leftCellCase, leftNotCellCase);
-        speculateTruthyObject(leftChild, leftValue, SpecObject | (~SpecCell));
-        ValueFromBlock cellResult = m_out.anchor(m_out.equal(rightCell, leftValue));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(leftNotCellCase, continuation);
-        FTL_TYPE_CHECK(
-            jsValueValue(leftValue), leftChild, SpecOther | SpecCell, isNotOther(leftValue));
-        ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, cellResult, notCellResult));
-    }
-    
-    void speculateTruthyObject(Edge edge, LValue cell, SpeculatedType filter)
-    {
-        if (masqueradesAsUndefinedWatchpointIsStillValid()) {
-            FTL_TYPE_CHECK(jsValueValue(cell), edge, filter, isNotObject(cell));
-            return;
-        }
-        
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, filter, isNotObject(cell));
-        speculate(
-            BadType, jsValueValue(cell), edge.node(),
-            m_out.testNonZero32(
-                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
-                m_out.constInt32(MasqueradesAsUndefined)));
-    }
-
-    template&lt;typename IntFunctor&gt;
-    void nonSpeculativeCompare(const IntFunctor&amp; intFunctor, S_JITOperation_EJJ helperFunction)
-    {
-        LValue left = lowJSValue(m_node-&gt;child1());
-        LValue right = lowJSValue(m_node-&gt;child2());
-        
-        LBasicBlock leftIsInt = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped left is int&quot;));
-        LBasicBlock fastPath = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped fast path&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;CompareEq untyped continuation&quot;));
-        
-        m_out.branch(isNotInt32(left), rarely(slowPath), usually(leftIsInt));
-        
-        LBasicBlock lastNext = m_out.appendTo(leftIsInt, fastPath);
-        m_out.branch(isNotInt32(right), rarely(slowPath), usually(fastPath));
-        
-        m_out.appendTo(fastPath, slowPath);
-        ValueFromBlock fastResult = m_out.anchor(intFunctor(unboxInt32(left), unboxInt32(right)));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        ValueFromBlock slowResult = m_out.anchor(m_out.notNull(vmCall(
-            m_out.intPtr, m_out.operation(helperFunction), m_callFrame, left, right)));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        setBoolean(m_out.phi(m_out.boolean, fastResult, slowResult));
-    }
-
-    LValue stringsEqual(LValue leftJSString, LValue rightJSString)
-    {
-        LBasicBlock notTriviallyUnequalCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual not trivially unequal case&quot;));
-        LBasicBlock notEmptyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual not empty case&quot;));
-        LBasicBlock leftReadyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual left ready case&quot;));
-        LBasicBlock rightReadyCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual right ready case&quot;));
-        LBasicBlock left8BitCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual left 8-bit case&quot;));
-        LBasicBlock right8BitCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual right 8-bit case&quot;));
-        LBasicBlock loop = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual loop&quot;));
-        LBasicBlock bytesEqual = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual bytes equal&quot;));
-        LBasicBlock trueCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual true case&quot;));
-        LBasicBlock falseCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual false case&quot;));
-        LBasicBlock slowCase = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual slow case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;stringsEqual continuation&quot;));
-
-        LValue length = m_out.load32(leftJSString, m_heaps.JSString_length);
-
-        m_out.branch(
-            m_out.notEqual(length, m_out.load32(rightJSString, m_heaps.JSString_length)),
-            unsure(falseCase), unsure(notTriviallyUnequalCase));
-
-        LBasicBlock lastNext = m_out.appendTo(notTriviallyUnequalCase, notEmptyCase);
-
-        m_out.branch(m_out.isZero32(length), unsure(trueCase), unsure(notEmptyCase));
-
-        m_out.appendTo(notEmptyCase, leftReadyCase);
-
-        LValue left = m_out.loadPtr(leftJSString, m_heaps.JSString_value);
-        LValue right = m_out.loadPtr(rightJSString, m_heaps.JSString_value);
-
-        m_out.branch(m_out.notNull(left), usually(leftReadyCase), rarely(slowCase));
-
-        m_out.appendTo(leftReadyCase, rightReadyCase);
-        
-        m_out.branch(m_out.notNull(right), usually(rightReadyCase), rarely(slowCase));
-
-        m_out.appendTo(rightReadyCase, left8BitCase);
-
-        m_out.branch(
-            m_out.testIsZero32(
-                m_out.load32(left, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIs8Bit())),
-            unsure(slowCase), unsure(left8BitCase));
-
-        m_out.appendTo(left8BitCase, right8BitCase);
-
-        m_out.branch(
-            m_out.testIsZero32(
-                m_out.load32(right, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIs8Bit())),
-            unsure(slowCase), unsure(right8BitCase));
-
-        m_out.appendTo(right8BitCase, loop);
-
-        LValue leftData = m_out.loadPtr(left, m_heaps.StringImpl_data);
-        LValue rightData = m_out.loadPtr(right, m_heaps.StringImpl_data);
-
-        ValueFromBlock indexAtStart = m_out.anchor(length);
-
-        m_out.jump(loop);
-
-        m_out.appendTo(loop, bytesEqual);
-
-        LValue indexAtLoopTop = m_out.phi(m_out.int32, indexAtStart);
-        LValue indexInLoop = m_out.sub(indexAtLoopTop, m_out.int32One);
-
-        LValue leftByte = m_out.load8ZeroExt32(
-            m_out.baseIndex(m_heaps.characters8, leftData, m_out.zeroExtPtr(indexInLoop)));
-        LValue rightByte = m_out.load8ZeroExt32(
-            m_out.baseIndex(m_heaps.characters8, rightData, m_out.zeroExtPtr(indexInLoop)));
-
-        m_out.branch(m_out.notEqual(leftByte, rightByte), unsure(falseCase), unsure(bytesEqual));
-
-        m_out.appendTo(bytesEqual, trueCase);
-
-        ValueFromBlock indexForNextIteration = m_out.anchor(indexInLoop);
-        m_out.addIncomingToPhi(indexAtLoopTop, indexForNextIteration);
-        m_out.branch(m_out.notZero32(indexInLoop), unsure(loop), unsure(trueCase));
-
-        m_out.appendTo(trueCase, falseCase);
-        
-        ValueFromBlock trueResult = m_out.anchor(m_out.booleanTrue);
-        m_out.jump(continuation);
-
-        m_out.appendTo(falseCase, slowCase);
-
-        ValueFromBlock falseResult = m_out.anchor(m_out.booleanFalse);
-        m_out.jump(continuation);
-
-        m_out.appendTo(slowCase, continuation);
-
-        LValue slowResultValue = vmCall(
-            m_out.int64, m_out.operation(operationCompareStringEq), m_callFrame,
-            leftJSString, rightJSString);
-        ValueFromBlock slowResult = m_out.anchor(unboxBoolean(slowResultValue));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.boolean, trueResult, falseResult, slowResult);
-    }
-
-    enum ScratchFPRUsage {
-        DontNeedScratchFPR,
-        NeedScratchFPR
-    };
-    template&lt;typename BinaryArithOpGenerator, ScratchFPRUsage scratchFPRUsage = DontNeedScratchFPR&gt;
-    void emitBinarySnippet(J_JITOperation_EJJ slowPathFunction)
-    {
-        Node* node = m_node;
-        
-        LValue left = lowJSValue(node-&gt;child1());
-        LValue right = lowJSValue(node-&gt;child2());
-
-        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
-        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
-            
-        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-        patchpoint-&gt;appendSomeRegister(left);
-        patchpoint-&gt;appendSomeRegister(right);
-        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
-        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-        patchpoint-&gt;numGPScratchRegisters = 1;
-        patchpoint-&gt;numFPScratchRegisters = 2;
-        if (scratchFPRUsage == NeedScratchFPR)
-            patchpoint-&gt;numFPScratchRegisters++;
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-
-                auto generator = Box&lt;BinaryArithOpGenerator&gt;::create(
-                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
-                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()),
-                    params.fpScratch(0), params.fpScratch(1), params.gpScratch(0),
-                    scratchFPRUsage == NeedScratchFPR ? params.fpScratch(2) : InvalidFPRReg);
-
-                generator-&gt;generateFastPath(jit);
-
-                if (generator-&gt;didEmitFastPath()) {
-                    generator-&gt;endJumpList().link(&amp;jit);
-                    CCallHelpers::Label done = jit.label();
-                    
-                    params.addLatePath(
-                        [=] (CCallHelpers&amp; jit) {
-                            AllowMacroScratchRegisterUsage allowScratch(jit);
-                            
-                            generator-&gt;slowPathJumpList().link(&amp;jit);
-                            callOperation(
-                                *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                                exceptions.get(), slowPathFunction, params[0].gpr(),
-                                params[1].gpr(), params[2].gpr());
-                            jit.jump().linkTo(done, &amp;jit);
-                        });
-                } else {
-                    callOperation(
-                        *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                        exceptions.get(), slowPathFunction, params[0].gpr(), params[1].gpr(),
-                        params[2].gpr());
-                }
-            });
-
-        setJSValue(patchpoint);
-    }
-
-    template&lt;typename BinaryBitOpGenerator&gt;
-    void emitBinaryBitOpSnippet(J_JITOperation_EJJ slowPathFunction)
-    {
-        Node* node = m_node;
-        
-        // FIXME: Make this do exceptions.
-        // https://bugs.webkit.org/show_bug.cgi?id=151686
-            
-        LValue left = lowJSValue(node-&gt;child1());
-        LValue right = lowJSValue(node-&gt;child2());
-
-        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
-        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
-            
-        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-        patchpoint-&gt;appendSomeRegister(left);
-        patchpoint-&gt;appendSomeRegister(right);
-        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
-        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-        patchpoint-&gt;numGPScratchRegisters = 1;
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                    
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-                    
-                auto generator = Box&lt;BinaryBitOpGenerator&gt;::create(
-                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
-                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()), params.gpScratch(0));
-
-                generator-&gt;generateFastPath(jit);
-                generator-&gt;endJumpList().link(&amp;jit);
-                CCallHelpers::Label done = jit.label();
-
-                params.addLatePath(
-                    [=] (CCallHelpers&amp; jit) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-                            
-                        generator-&gt;slowPathJumpList().link(&amp;jit);
-                        callOperation(
-                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                            exceptions.get(), slowPathFunction, params[0].gpr(),
-                            params[1].gpr(), params[2].gpr());
-                        jit.jump().linkTo(done, &amp;jit);
-                    });
-            });
-
-        setJSValue(patchpoint);
-    }
-
-    void emitRightShiftSnippet(JITRightShiftGenerator::ShiftType shiftType)
-    {
-        Node* node = m_node;
-        
-        // FIXME: Make this do exceptions.
-        // https://bugs.webkit.org/show_bug.cgi?id=151686
-            
-        LValue left = lowJSValue(node-&gt;child1());
-        LValue right = lowJSValue(node-&gt;child2());
-
-        SnippetOperand leftOperand(m_state.forNode(node-&gt;child1()).resultType());
-        SnippetOperand rightOperand(m_state.forNode(node-&gt;child2()).resultType());
-            
-        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
-        patchpoint-&gt;appendSomeRegister(left);
-        patchpoint-&gt;appendSomeRegister(right);
-        patchpoint-&gt;append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
-        patchpoint-&gt;append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(patchpoint);
-        patchpoint-&gt;numGPScratchRegisters = 1;
-        patchpoint-&gt;numFPScratchRegisters = 1;
-        patchpoint-&gt;clobber(RegisterSet::macroScratchRegisters());
-        State* state = &amp;m_ftlState;
-        patchpoint-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                AllowMacroScratchRegisterUsage allowScratch(jit);
-                    
-                Box&lt;CCallHelpers::JumpList&gt; exceptions =
-                    exceptionHandle-&gt;scheduleExitCreation(params)-&gt;jumps(jit);
-                    
-                auto generator = Box&lt;JITRightShiftGenerator&gt;::create(
-                    leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
-                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()),
-                    params.fpScratch(0), params.gpScratch(0), InvalidFPRReg, shiftType);
-
-                generator-&gt;generateFastPath(jit);
-                generator-&gt;endJumpList().link(&amp;jit);
-                CCallHelpers::Label done = jit.label();
-
-                params.addLatePath(
-                    [=] (CCallHelpers&amp; jit) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-                            
-                        generator-&gt;slowPathJumpList().link(&amp;jit);
-
-                        J_JITOperation_EJJ slowPathFunction =
-                            shiftType == JITRightShiftGenerator::SignedShift
-                            ? operationValueBitRShift : operationValueBitURShift;
-                        
-                        callOperation(
-                            *state, params.unavailableRegisters(), jit, node-&gt;origin.semantic,
-                            exceptions.get(), slowPathFunction, params[0].gpr(),
-                            params[1].gpr(), params[2].gpr());
-                        jit.jump().linkTo(done, &amp;jit);
-                    });
-            });
-
-        setJSValue(patchpoint);
-    }
-
-    LValue allocateCell(LValue allocator, LBasicBlock slowPath)
-    {
-        LBasicBlock success = FTL_NEW_BLOCK(m_out, (&quot;object allocation success&quot;));
-    
-        LValue result;
-        LValue condition;
-        if (Options::forceGCSlowPaths()) {
-            result = m_out.intPtrZero;
-            condition = m_out.booleanFalse;
-        } else {
-            result = m_out.loadPtr(
-                allocator, m_heaps.MarkedAllocator_freeListHead);
-            condition = m_out.notNull(result);
-        }
-        m_out.branch(condition, usually(success), rarely(slowPath));
-        
-        m_out.appendTo(success);
-        
-        m_out.storePtr(
-            m_out.loadPtr(result, m_heaps.JSCell_freeListNext),
-            allocator, m_heaps.MarkedAllocator_freeListHead);
-
-        return result;
-    }
-    
-    void storeStructure(LValue object, Structure* structure)
-    {
-        m_out.store32(m_out.constInt32(structure-&gt;id()), object, m_heaps.JSCell_structureID);
-        m_out.store32(
-            m_out.constInt32(structure-&gt;objectInitializationBlob()),
-            object, m_heaps.JSCell_usefulBytes);
-    }
-
-    LValue allocateCell(LValue allocator, Structure* structure, LBasicBlock slowPath)
-    {
-        LValue result = allocateCell(allocator, slowPath);
-        storeStructure(result, structure);
-        return result;
-    }
-
-    LValue allocateObject(
-        LValue allocator, Structure* structure, LValue butterfly, LBasicBlock slowPath)
-    {
-        LValue result = allocateCell(allocator, structure, slowPath);
-        m_out.storePtr(butterfly, result, m_heaps.JSObject_butterfly);
-        return result;
-    }
-    
-    template&lt;typename ClassType&gt;
-    LValue allocateObject(
-        size_t size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
-    {
-        MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectOfType&lt;ClassType&gt;(size);
-        return allocateObject(m_out.constIntPtr(allocator), structure, butterfly, slowPath);
-    }
-    
-    template&lt;typename ClassType&gt;
-    LValue allocateObject(Structure* structure, LValue butterfly, LBasicBlock slowPath)
-    {
-        return allocateObject&lt;ClassType&gt;(
-            ClassType::allocationSize(0), structure, butterfly, slowPath);
-    }
-    
-    template&lt;typename ClassType&gt;
-    LValue allocateVariableSizedObject(
-        LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
-    {
-        static_assert(!(MarkedSpace::preciseStep &amp; (MarkedSpace::preciseStep - 1)), &quot;MarkedSpace::preciseStep must be a power of two.&quot;);
-        static_assert(!(MarkedSpace::impreciseStep &amp; (MarkedSpace::impreciseStep - 1)), &quot;MarkedSpace::impreciseStep must be a power of two.&quot;);
-
-        LValue subspace = m_out.constIntPtr(&amp;vm().heap.subspaceForObjectOfType&lt;ClassType&gt;());
-        
-        LBasicBlock smallCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject small case&quot;));
-        LBasicBlock largeOrOversizeCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject large or oversize case&quot;));
-        LBasicBlock largeCaseBlock = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject large case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocateVariableSizedObject continuation&quot;));
-        
-        LValue uproundedSize = m_out.add(size, m_out.constInt32(MarkedSpace::preciseStep - 1));
-        LValue isSmall = m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::preciseCutoff));
-        m_out.branch(isSmall, unsure(smallCaseBlock), unsure(largeOrOversizeCaseBlock));
-        
-        LBasicBlock lastNext = m_out.appendTo(smallCaseBlock, largeOrOversizeCaseBlock);
-        TypedPointer address = m_out.baseIndex(
-            m_heaps.MarkedSpace_Subspace_preciseAllocators, subspace,
-            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::preciseStep)))));
-        ValueFromBlock smallAllocator = m_out.anchor(address.value());
-        m_out.jump(continuation);
-        
-        m_out.appendTo(largeOrOversizeCaseBlock, largeCaseBlock);
-        m_out.branch(
-            m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::impreciseCutoff)),
-            usually(largeCaseBlock), rarely(slowPath));
-        
-        m_out.appendTo(largeCaseBlock, continuation);
-        address = m_out.baseIndex(
-            m_heaps.MarkedSpace_Subspace_impreciseAllocators, subspace,
-            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::impreciseStep)))));
-        ValueFromBlock largeAllocator = m_out.anchor(address.value());
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        LValue allocator = m_out.phi(m_out.intPtr, smallAllocator, largeAllocator);
-        
-        return allocateObject(allocator, structure, butterfly, slowPath);
-    }
-    
-    // Returns a pointer to the end of the allocation.
-    LValue allocateBasicStorageAndGetEnd(LValue size, LBasicBlock slowPath)
-    {
-        CopiedAllocator&amp; allocator = vm().heap.storageAllocator();
-        
-        LBasicBlock success = FTL_NEW_BLOCK(m_out, (&quot;storage allocation success&quot;));
-        
-        LValue remaining = m_out.loadPtr(m_out.absolute(&amp;allocator.m_currentRemaining));
-        LValue newRemaining = m_out.sub(remaining, size);
-        
-        m_out.branch(
-            m_out.lessThan(newRemaining, m_out.intPtrZero),
-            rarely(slowPath), usually(success));
-        
-        m_out.appendTo(success);
-        
-        m_out.storePtr(newRemaining, m_out.absolute(&amp;allocator.m_currentRemaining));
-        return m_out.sub(
-            m_out.loadPtr(m_out.absolute(&amp;allocator.m_currentPayloadEnd)), newRemaining);
-    }
-
-    LValue allocateBasicStorage(LValue size, LBasicBlock slowPath)
-    {
-        return m_out.sub(allocateBasicStorageAndGetEnd(size, slowPath), size);
-    }
-    
-    LValue allocateObject(Structure* structure)
-    {
-        size_t allocationSize = JSFinalObject::allocationSize(structure-&gt;inlineCapacity());
-        MarkedAllocator* allocator = &amp;vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
-        
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;allocateObject slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;allocateObject continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        ValueFromBlock fastResult = m_out.anchor(allocateObject(
-            m_out.constIntPtr(allocator), structure, m_out.intPtrZero, slowPath));
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-
-        LValue slowResultValue = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationNewObject, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(structure));
-            });
-        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.intPtr, fastResult, slowResult);
-    }
-    
-    struct ArrayValues {
-        ArrayValues()
-            : array(0)
-            , butterfly(0)
-        {
-        }
-        
-        ArrayValues(LValue array, LValue butterfly)
-            : array(array)
-            , butterfly(butterfly)
-        {
-        }
-        
-        LValue array;
-        LValue butterfly;
-    };
-    ArrayValues allocateJSArray(
-        Structure* structure, unsigned numElements, LBasicBlock slowPath)
-    {
-        ASSERT(
-            hasUndecided(structure-&gt;indexingType())
-            || hasInt32(structure-&gt;indexingType())
-            || hasDouble(structure-&gt;indexingType())
-            || hasContiguous(structure-&gt;indexingType()));
-        
-        unsigned vectorLength = std::max(BASE_VECTOR_LEN, numElements);
-        
-        LValue endOfStorage = allocateBasicStorageAndGetEnd(
-            m_out.constIntPtr(sizeof(JSValue) * vectorLength + sizeof(IndexingHeader)),
-            slowPath);
-        
-        LValue butterfly = m_out.sub(
-            endOfStorage, m_out.constIntPtr(sizeof(JSValue) * vectorLength));
-        
-        LValue object = allocateObject&lt;JSArray&gt;(
-            structure, butterfly, slowPath);
-        
-        m_out.store32(m_out.constInt32(numElements), butterfly, m_heaps.Butterfly_publicLength);
-        m_out.store32(m_out.constInt32(vectorLength), butterfly, m_heaps.Butterfly_vectorLength);
-        
-        if (hasDouble(structure-&gt;indexingType())) {
-            for (unsigned i = numElements; i &lt; vectorLength; ++i) {
-                m_out.store64(
-                    m_out.constInt64(bitwise_cast&lt;int64_t&gt;(PNaN)),
-                    butterfly, m_heaps.indexedDoubleProperties[i]);
-            }
-        }
-        
-        return ArrayValues(object, butterfly);
-    }
-    
-    ArrayValues allocateJSArray(Structure* structure, unsigned numElements)
-    {
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;JSArray allocation slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;JSArray allocation continuation&quot;));
-        
-        LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        ArrayValues fastValues = allocateJSArray(structure, numElements, slowPath);
-        ValueFromBlock fastArray = m_out.anchor(fastValues.array);
-        ValueFromBlock fastButterfly = m_out.anchor(fastValues.butterfly);
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-
-        LValue slowArrayValue = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationNewArrayWithSize, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(structure), CCallHelpers::TrustedImm32(numElements));
-            });
-        ValueFromBlock slowArray = m_out.anchor(slowArrayValue);
-        ValueFromBlock slowButterfly = m_out.anchor(
-            m_out.loadPtr(slowArrayValue, m_heaps.JSObject_butterfly));
-
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        
-        return ArrayValues(
-            m_out.phi(m_out.intPtr, fastArray, slowArray),
-            m_out.phi(m_out.intPtr, fastButterfly, slowButterfly));
-    }
-    
-    LValue boolify(Edge edge)
-    {
-        switch (edge.useKind()) {
-        case BooleanUse:
-        case KnownBooleanUse:
-            return lowBoolean(edge);
-        case Int32Use:
-            return m_out.notZero32(lowInt32(edge));
-        case DoubleRepUse:
-            return m_out.doubleNotEqualAndOrdered(lowDouble(edge), m_out.doubleZero);
-        case ObjectOrOtherUse:
-            return m_out.logicalNot(
-                equalNullOrUndefined(
-                    edge, CellCaseSpeculatesObject, SpeculateNullOrUndefined,
-                    ManualOperandSpeculation));
-        case StringUse: {
-            LValue stringValue = lowString(edge);
-            LValue length = m_out.load32NonNegative(stringValue, m_heaps.JSString_length);
-            return m_out.notEqual(length, m_out.int32Zero);
-        }
-        case UntypedUse: {
-            LValue value = lowJSValue(edge);
-            
-            // Implements the following control flow structure:
-            // if (value is cell) {
-            //     if (value is string)
-            //         result = !!value-&gt;length
-            //     else {
-            //         do evil things for masquerades-as-undefined
-            //         result = true
-            //     }
-            // } else if (value is int32) {
-            //     result = !!unboxInt32(value)
-            // } else if (value is number) {
-            //     result = !!unboxDouble(value)
-            // } else {
-            //     result = value == jsTrue
-            // }
-            
-            LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped cell case&quot;));
-            LBasicBlock stringCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped string case&quot;));
-            LBasicBlock notStringCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not string case&quot;));
-            LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not cell case&quot;));
-            LBasicBlock int32Case = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped int32 case&quot;));
-            LBasicBlock notInt32Case = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not int32 case&quot;));
-            LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped double case&quot;));
-            LBasicBlock notDoubleCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped not double case&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped continuation&quot;));
-            
-            Vector&lt;ValueFromBlock&gt; results;
-            
-            m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
-            
-            LBasicBlock lastNext = m_out.appendTo(cellCase, stringCase);
-            m_out.branch(
-                isString(value, provenType(edge) &amp; SpecCell),
-                unsure(stringCase), unsure(notStringCase));
-            
-            m_out.appendTo(stringCase, notStringCase);
-            LValue nonEmptyString = m_out.notZero32(
-                m_out.load32NonNegative(value, m_heaps.JSString_length));
-            results.append(m_out.anchor(nonEmptyString));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notStringCase, notCellCase);
-            LValue isTruthyObject;
-            if (masqueradesAsUndefinedWatchpointIsStillValid())
-                isTruthyObject = m_out.booleanTrue;
-            else {
-                LBasicBlock masqueradesCase = FTL_NEW_BLOCK(m_out, (&quot;Boolify untyped masquerades case&quot;));
-                
-                results.append(m_out.anchor(m_out.booleanTrue));
-                
-                m_out.branch(
-                    m_out.testIsZero32(
-                        m_out.load8ZeroExt32(value, m_heaps.JSCell_typeInfoFlags),
-                        m_out.constInt32(MasqueradesAsUndefined)),
-                    usually(continuation), rarely(masqueradesCase));
-                
-                m_out.appendTo(masqueradesCase);
-                
-                isTruthyObject = m_out.notEqual(
-                    m_out.constIntPtr(m_graph.globalObjectFor(m_node-&gt;origin.semantic)),
-                    m_out.loadPtr(loadStructure(value), m_heaps.Structure_globalObject));
-            }
-            results.append(m_out.anchor(isTruthyObject));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notCellCase, int32Case);
-            m_out.branch(
-                isInt32(value, provenType(edge) &amp; ~SpecCell),
-                unsure(int32Case), unsure(notInt32Case));
-            
-            m_out.appendTo(int32Case, notInt32Case);
-            results.append(m_out.anchor(m_out.notZero32(unboxInt32(value))));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notInt32Case, doubleCase);
-            m_out.branch(
-                isNumber(value, provenType(edge) &amp; ~SpecCell),
-                unsure(doubleCase), unsure(notDoubleCase));
-            
-            m_out.appendTo(doubleCase, notDoubleCase);
-            LValue doubleIsTruthy = m_out.doubleNotEqualAndOrdered(
-                unboxDouble(value), m_out.constDouble(0));
-            results.append(m_out.anchor(doubleIsTruthy));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(notDoubleCase, continuation);
-            LValue miscIsTruthy = m_out.equal(
-                value, m_out.constInt64(JSValue::encode(jsBoolean(true))));
-            results.append(m_out.anchor(miscIsTruthy));
-            m_out.jump(continuation);
-            
-            m_out.appendTo(continuation, lastNext);
-            return m_out.phi(m_out.boolean, results);
-        }
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-            return 0;
-        }
-    }
-    
-    enum StringOrObjectMode {
-        AllCellsAreFalse,
-        CellCaseSpeculatesObject
-    };
-    enum EqualNullOrUndefinedMode {
-        EqualNull,
-        EqualUndefined,
-        EqualNullOrUndefined,
-        SpeculateNullOrUndefined
-    };
-    LValue equalNullOrUndefined(
-        Edge edge, StringOrObjectMode cellMode, EqualNullOrUndefinedMode primitiveMode,
-        OperandSpeculationMode operandMode = AutomaticOperandSpeculation)
-    {
-        bool validWatchpoint = masqueradesAsUndefinedWatchpointIsStillValid();
-        
-        LValue value = lowJSValue(edge, operandMode);
-        
-        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined cell case&quot;));
-        LBasicBlock primitiveCase = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined primitive case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined continuation&quot;));
-        
-        m_out.branch(isNotCell(value, provenType(edge)), unsure(primitiveCase), unsure(cellCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(cellCase, primitiveCase);
-        
-        Vector&lt;ValueFromBlock, 3&gt; results;
-        
-        switch (cellMode) {
-        case AllCellsAreFalse:
-            break;
-        case CellCaseSpeculatesObject:
-            FTL_TYPE_CHECK(
-                jsValueValue(value), edge, (~SpecCell) | SpecObject, isNotObject(value));
-            break;
-        }
-        
-        if (validWatchpoint) {
-            results.append(m_out.anchor(m_out.booleanFalse));
-            m_out.jump(continuation);
-        } else {
-            LBasicBlock masqueradesCase =
-                FTL_NEW_BLOCK(m_out, (&quot;EqualNullOrUndefined masquerades case&quot;));
-                
-            results.append(m_out.anchor(m_out.booleanFalse));
-            
-            m_out.branch(
-                m_out.testNonZero32(
-                    m_out.load8ZeroExt32(value, m_heaps.JSCell_typeInfoFlags),
-                    m_out.constInt32(MasqueradesAsUndefined)),
-                rarely(masqueradesCase), usually(continuation));
-            
-            m_out.appendTo(masqueradesCase, primitiveCase);
-            
-            LValue structure = loadStructure(value);
-            
-            results.append(m_out.anchor(
-                m_out.equal(
-                    m_out.constIntPtr(m_graph.globalObjectFor(m_node-&gt;origin.semantic)),
-                    m_out.loadPtr(structure, m_heaps.Structure_globalObject))));
-            m_out.jump(continuation);
-        }
-        
-        m_out.appendTo(primitiveCase, continuation);
-        
-        LValue primitiveResult;
-        switch (primitiveMode) {
-        case EqualNull:
-            primitiveResult = m_out.equal(value, m_out.constInt64(ValueNull));
-            break;
-        case EqualUndefined:
-            primitiveResult = m_out.equal(value, m_out.constInt64(ValueUndefined));
-            break;
-        case EqualNullOrUndefined:
-            primitiveResult = isOther(value, provenType(edge));
-            break;
-        case SpeculateNullOrUndefined:
-            FTL_TYPE_CHECK(
-                jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
-            primitiveResult = m_out.booleanTrue;
-            break;
-        }
-        results.append(m_out.anchor(primitiveResult));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        
-        return m_out.phi(m_out.boolean, results);
-    }
-    
-    template&lt;typename FunctionType&gt;
-    void contiguousPutByValOutOfBounds(
-        FunctionType slowPathFunction, LValue base, LValue storage, LValue index, LValue value,
-        LBasicBlock continuation)
-    {
-        LValue isNotInBounds = m_out.aboveOrEqual(
-            index, m_out.load32NonNegative(storage, m_heaps.Butterfly_publicLength));
-        if (!m_node-&gt;arrayMode().isInBounds()) {
-            LBasicBlock notInBoundsCase =
-                FTL_NEW_BLOCK(m_out, (&quot;PutByVal not in bounds&quot;));
-            LBasicBlock performStore =
-                FTL_NEW_BLOCK(m_out, (&quot;PutByVal perform store&quot;));
-                
-            m_out.branch(isNotInBounds, unsure(notInBoundsCase), unsure(performStore));
-                
-            LBasicBlock lastNext = m_out.appendTo(notInBoundsCase, performStore);
-                
-            LValue isOutOfBounds = m_out.aboveOrEqual(
-                index, m_out.load32NonNegative(storage, m_heaps.Butterfly_vectorLength));
-                
-            if (!m_node-&gt;arrayMode().isOutOfBounds())
-                speculate(OutOfBounds, noValue(), 0, isOutOfBounds);
-            else {
-                LBasicBlock outOfBoundsCase =
-                    FTL_NEW_BLOCK(m_out, (&quot;PutByVal out of bounds&quot;));
-                LBasicBlock holeCase =
-                    FTL_NEW_BLOCK(m_out, (&quot;PutByVal hole case&quot;));
-                    
-                m_out.branch(isOutOfBounds, rarely(outOfBoundsCase), usually(holeCase));
-                    
-                LBasicBlock innerLastNext = m_out.appendTo(outOfBoundsCase, holeCase);
-                    
-                vmCall(
-                    m_out.voidType, m_out.operation(slowPathFunction),
-                    m_callFrame, base, index, value);
-                    
-                m_out.jump(continuation);
-                    
-                m_out.appendTo(holeCase, innerLastNext);
-            }
-            
-            m_out.store32(
-                m_out.add(index, m_out.int32One),
-                storage, m_heaps.Butterfly_publicLength);
-                
-            m_out.jump(performStore);
-            m_out.appendTo(performStore, lastNext);
-        }
-    }
-    
-    void buildSwitch(SwitchData* data, LType type, LValue switchValue)
-    {
-        ASSERT(type == m_out.intPtr || type == m_out.int32);
-
-        Vector&lt;SwitchCase&gt; cases;
-        for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
-            SwitchCase newCase;
-
-            if (type == m_out.intPtr) {
-                newCase = SwitchCase(m_out.constIntPtr(data-&gt;cases[i].value.switchLookupValue(data-&gt;kind)),
-                    lowBlock(data-&gt;cases[i].target.block), Weight(data-&gt;cases[i].target.count));
-            } else if (type == m_out.int32) {
-                newCase = SwitchCase(m_out.constInt32(data-&gt;cases[i].value.switchLookupValue(data-&gt;kind)),
-                    lowBlock(data-&gt;cases[i].target.block), Weight(data-&gt;cases[i].target.count));
-            } else
-                CRASH();
-
-            cases.append(newCase);
-        }
-        
-        m_out.switchInstruction(
-            switchValue, cases,
-            lowBlock(data-&gt;fallThrough.block), Weight(data-&gt;fallThrough.count));
-    }
-    
-    void switchString(SwitchData* data, LValue string)
-    {
-        bool canDoBinarySwitch = true;
-        unsigned totalLength = 0;
-        
-        for (DFG::SwitchCase myCase : data-&gt;cases) {
-            StringImpl* string = myCase.value.stringImpl();
-            if (!string-&gt;is8Bit()) {
-                canDoBinarySwitch = false;
-                break;
-            }
-            if (string-&gt;length() &gt; Options::maximumBinaryStringSwitchCaseLength()) {
-                canDoBinarySwitch = false;
-                break;
-            }
-            totalLength += string-&gt;length();
-        }
-        
-        if (!canDoBinarySwitch || totalLength &gt; Options::maximumBinaryStringSwitchTotalLength()) {
-            switchStringSlow(data, string);
-            return;
-        }
-        
-        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
-        LValue length = m_out.load32(string, m_heaps.JSString_length);
-        
-        LBasicBlock hasImplBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString has impl case&quot;));
-        LBasicBlock is8BitBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString is 8 bit case&quot;));
-        LBasicBlock slowBlock = FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString slow case&quot;));
-        
-        m_out.branch(m_out.isNull(stringImpl), unsure(slowBlock), unsure(hasImplBlock));
-        
-        LBasicBlock lastNext = m_out.appendTo(hasImplBlock, is8BitBlock);
-        
-        m_out.branch(
-            m_out.testIsZero32(
-                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIs8Bit())),
-            unsure(slowBlock), unsure(is8BitBlock));
-        
-        m_out.appendTo(is8BitBlock, slowBlock);
-        
-        LValue buffer = m_out.loadPtr(stringImpl, m_heaps.StringImpl_data);
-        
-        // FIXME: We should propagate branch weight data to the cases of this switch.
-        // https://bugs.webkit.org/show_bug.cgi?id=144368
-        
-        Vector&lt;StringSwitchCase&gt; cases;
-        for (DFG::SwitchCase myCase : data-&gt;cases)
-            cases.append(StringSwitchCase(myCase.value.stringImpl(), lowBlock(myCase.target.block)));
-        std::sort(cases.begin(), cases.end());
-        switchStringRecurse(data, buffer, length, cases, 0, 0, cases.size(), 0, false);
-
-        m_out.appendTo(slowBlock, lastNext);
-        switchStringSlow(data, string);
-    }
-    
-    // The code for string switching is based closely on the same code in the DFG backend. While it
-    // would be nice to reduce the amount of similar-looking code, it seems like this is one of
-    // those algorithms where factoring out the common bits would result in more code than just
-    // duplicating.
-    
-    struct StringSwitchCase {
-        StringSwitchCase() { }
-        
-        StringSwitchCase(StringImpl* string, LBasicBlock target)
-            : string(string)
-            , target(target)
-        {
-        }
-
-        bool operator&lt;(const StringSwitchCase&amp; other) const
-        {
-            return stringLessThan(*string, *other.string);
-        }
-        
-        StringImpl* string;
-        LBasicBlock target;
-    };
-    
-    struct CharacterCase {
-        CharacterCase()
-            : character(0)
-            , begin(0)
-            , end(0)
-        {
-        }
-        
-        CharacterCase(LChar character, unsigned begin, unsigned end)
-            : character(character)
-            , begin(begin)
-            , end(end)
-        {
-        }
-        
-        bool operator&lt;(const CharacterCase&amp; other) const
-        {
-            return character &lt; other.character;
-        }
-        
-        LChar character;
-        unsigned begin;
-        unsigned end;
-    };
-    
-    void switchStringRecurse(
-        SwitchData* data, LValue buffer, LValue length, const Vector&lt;StringSwitchCase&gt;&amp; cases,
-        unsigned numChecked, unsigned begin, unsigned end, unsigned alreadyCheckedLength,
-        unsigned checkedExactLength)
-    {
-        LBasicBlock fallThrough = lowBlock(data-&gt;fallThrough.block);
-        
-        if (begin == end) {
-            m_out.jump(fallThrough);
-            return;
-        }
-        
-        unsigned minLength = cases[begin].string-&gt;length();
-        unsigned commonChars = minLength;
-        bool allLengthsEqual = true;
-        for (unsigned i = begin + 1; i &lt; end; ++i) {
-            unsigned myCommonChars = numChecked;
-            unsigned limit = std::min(cases[begin].string-&gt;length(), cases[i].string-&gt;length());
-            for (unsigned j = numChecked; j &lt; limit; ++j) {
-                if (cases[begin].string-&gt;at(j) != cases[i].string-&gt;at(j))
-                    break;
-                myCommonChars++;
-            }
-            commonChars = std::min(commonChars, myCommonChars);
-            if (minLength != cases[i].string-&gt;length())
-                allLengthsEqual = false;
-            minLength = std::min(minLength, cases[i].string-&gt;length());
-        }
-        
-        if (checkedExactLength) {
-            DFG_ASSERT(m_graph, m_node, alreadyCheckedLength == minLength);
-            DFG_ASSERT(m_graph, m_node, allLengthsEqual);
-        }
-        
-        DFG_ASSERT(m_graph, m_node, minLength &gt;= commonChars);
-        
-        if (!allLengthsEqual &amp;&amp; alreadyCheckedLength &lt; minLength)
-            m_out.check(m_out.below(length, m_out.constInt32(minLength)), unsure(fallThrough));
-        if (allLengthsEqual &amp;&amp; (alreadyCheckedLength &lt; minLength || !checkedExactLength))
-            m_out.check(m_out.notEqual(length, m_out.constInt32(minLength)), unsure(fallThrough));
-        
-        for (unsigned i = numChecked; i &lt; commonChars; ++i) {
-            m_out.check(
-                m_out.notEqual(
-                    m_out.load8ZeroExt32(buffer, m_heaps.characters8[i]),
-                    m_out.constInt32(static_cast&lt;uint16_t&gt;(cases[begin].string-&gt;at(i)))),
-                unsure(fallThrough));
-        }
-        
-        if (minLength == commonChars) {
-            // This is the case where one of the cases is a prefix of all of the other cases.
-            // We've already checked that the input string is a prefix of all of the cases,
-            // so we just check length to jump to that case.
-            
-            DFG_ASSERT(m_graph, m_node, cases[begin].string-&gt;length() == commonChars);
-            for (unsigned i = begin + 1; i &lt; end; ++i)
-                DFG_ASSERT(m_graph, m_node, cases[i].string-&gt;length() &gt; commonChars);
-            
-            if (allLengthsEqual) {
-                DFG_ASSERT(m_graph, m_node, end == begin + 1);
-                m_out.jump(cases[begin].target);
-                return;
-            }
-            
-            m_out.check(
-                m_out.equal(length, m_out.constInt32(commonChars)),
-                unsure(cases[begin].target));
-            
-            // We've checked if the length is &gt;= minLength, and then we checked if the length is
-            // == commonChars. We get to this point if it is &gt;= minLength but not == commonChars.
-            // Hence we know that it now must be &gt; minLength, i.e. that it's &gt;= minLength + 1.
-            switchStringRecurse(
-                data, buffer, length, cases, commonChars, begin + 1, end, minLength + 1, false);
-            return;
-        }
-        
-        // At this point we know that the input string is longer than commonChars characters, and
-        // that we have only verified the first commonChars characters. Use a binary switch on the
-        // next unchecked character, i.e. string[commonChars].
-        
-        DFG_ASSERT(m_graph, m_node, end &gt;= begin + 2);
-        
-        LValue uncheckedChar = m_out.load8ZeroExt32(buffer, m_heaps.characters8[commonChars]);
-        
-        Vector&lt;CharacterCase&gt; characterCases;
-        CharacterCase currentCase(cases[begin].string-&gt;at(commonChars), begin, begin + 1);
-        for (unsigned i = begin + 1; i &lt; end; ++i) {
-            LChar currentChar = cases[i].string-&gt;at(commonChars);
-            if (currentChar != currentCase.character) {
-                currentCase.end = i;
-                characterCases.append(currentCase);
-                currentCase = CharacterCase(currentChar, i, i + 1);
-            } else
-                currentCase.end = i + 1;
-        }
-        characterCases.append(currentCase);
-        
-        Vector&lt;LBasicBlock&gt; characterBlocks;
-        for (CharacterCase&amp; myCase : characterCases)
-            characterBlocks.append(FTL_NEW_BLOCK(m_out, (&quot;Switch/SwitchString case for &quot;, myCase.character, &quot; at index &quot;, commonChars)));
-        
-        Vector&lt;SwitchCase&gt; switchCases;
-        for (unsigned i = 0; i &lt; characterCases.size(); ++i) {
-            if (i)
-                DFG_ASSERT(m_graph, m_node, characterCases[i - 1].character &lt; characterCases[i].character);
-            switchCases.append(SwitchCase(
-                m_out.constInt32(characterCases[i].character), characterBlocks[i], Weight()));
-        }
-        m_out.switchInstruction(uncheckedChar, switchCases, fallThrough, Weight());
-        
-        LBasicBlock lastNext = m_out.m_nextBlock;
-        characterBlocks.append(lastNext); // Makes it convenient to set nextBlock.
-        for (unsigned i = 0; i &lt; characterCases.size(); ++i) {
-            m_out.appendTo(characterBlocks[i], characterBlocks[i + 1]);
-            switchStringRecurse(
-                data, buffer, length, cases, commonChars + 1,
-                characterCases[i].begin, characterCases[i].end, minLength, allLengthsEqual);
-        }
-        
-        DFG_ASSERT(m_graph, m_node, m_out.m_nextBlock == lastNext);
-    }
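The recursion above is driven by three quantities computed over cases[begin, end): the shortest case length (minLength), how far the shared prefix extends past the already-verified characters (commonChars), and whether every case has the same length (allLengthsEqual). A standalone sketch of that computation, using plain std::string in place of the DFG's string cases (hypothetical helper names, not part of this patch):

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct PrefixInfo {
    unsigned minLength;      // length of the shortest case string
    unsigned commonChars;    // leading chars shared by every case (>= numChecked)
    bool allLengthsEqual;    // true if every case string has the same length
};

// Mirrors the loop at the top of switchStringRecurse: starting from the first
// numChecked characters (already verified), find how far the common prefix
// extends across cases[begin..end).
PrefixInfo computePrefixInfo(const std::vector<std::string>& cases,
                             unsigned begin, unsigned end, unsigned numChecked)
{
    PrefixInfo info;
    info.minLength = static_cast<unsigned>(cases[begin].size());
    info.commonChars = info.minLength;
    info.allLengthsEqual = true;
    for (unsigned i = begin + 1; i < end; ++i) {
        unsigned myCommonChars = numChecked;
        unsigned limit = static_cast<unsigned>(
            std::min(cases[begin].size(), cases[i].size()));
        for (unsigned j = numChecked; j < limit; ++j) {
            if (cases[begin][j] != cases[i][j])
                break;
            myCommonChars++;
        }
        info.commonChars = std::min(info.commonChars, myCommonChars);
        if (info.minLength != cases[i].size())
            info.allLengthsEqual = false;
        info.minLength = static_cast<unsigned>(
            std::min<size_t>(info.minLength, cases[i].size()));
    }
    return info;
}
```

When minLength == commonChars, one case is a prefix of all the others, which is exactly the situation the length check above handles before recursing.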
-    
-    void switchStringSlow(SwitchData* data, LValue string)
-    {
-        // FIXME: We ought to be able to use computed gotos here. We would save the labels of the
-        // blocks we want to jump to, and then request their addresses after compilation completes.
-        // https://bugs.webkit.org/show_bug.cgi?id=144369
-        
-        LValue branchOffset = vmCall(
-            m_out.int32, m_out.operation(operationSwitchStringAndGetBranchOffset),
-            m_callFrame, m_out.constIntPtr(data-&gt;switchTableIndex), string);
-        
-        StringJumpTable&amp; table = codeBlock()-&gt;stringSwitchJumpTable(data-&gt;switchTableIndex);
-        
-        Vector&lt;SwitchCase&gt; cases;
-        std::unordered_set&lt;int32_t&gt; alreadyHandled; // Branch offsets may be negative or zero, which are corner cases for WTF::HashSet's integer traits. std::unordered_set sidesteps that, and we don't care about throughput here.
-        for (unsigned i = 0; i &lt; data-&gt;cases.size(); ++i) {
-            // FIXME: The fact that we're using the bytecode's switch table means that the
-            // following DFG IR transformation would be invalid.
-            //
-            // Original code:
-            //     switch (v) {
-            //     case &quot;foo&quot;:
-            //     case &quot;bar&quot;:
-            //         things();
-            //         break;
-            //     default:
-            //         break;
-            //     }
-            //
-            // New code:
-            //     switch (v) {
-            //     case &quot;foo&quot;:
-            //         instrumentFoo();
-            //         goto _things;
-            //     case &quot;bar&quot;:
-            //         instrumentBar();
-            //     _things:
-            //         things();
-            //         break;
-            //     default:
-            //         break;
-            //     }
-            //
-            // Luckily, we don't currently do any such transformation. But it's kind of silly that
-            // this is an issue.
-            // https://bugs.webkit.org/show_bug.cgi?id=144635
-            
-            DFG::SwitchCase myCase = data-&gt;cases[i];
-            StringJumpTable::StringOffsetTable::iterator iter =
-                table.offsetTable.find(myCase.value.stringImpl());
-            DFG_ASSERT(m_graph, m_node, iter != table.offsetTable.end());
-            
-            if (!alreadyHandled.insert(iter-&gt;value.branchOffset).second)
-                continue;
-
-            cases.append(SwitchCase(
-                m_out.constInt32(iter-&gt;value.branchOffset),
-                lowBlock(myCase.target.block), Weight(myCase.target.count)));
-        }
-        
-        m_out.switchInstruction(
-            branchOffset, cases, lowBlock(data-&gt;fallThrough.block),
-            Weight(data-&gt;fallThrough.count));
-    }
-    
-    // Calls the functor at the point of code generation where we know what the result type is.
-    // You can emit whatever code you like at that point. Expects you to terminate the basic block.
-    // When buildTypeOf() returns, it will have terminated all basic blocks that it created. So, if
-    // you aren't using this as the terminator of a high-level block, you should create your own
-    // continuation and set it as the nextBlock (m_out.insertNewBlocksBefore(continuation)) before
-    // calling this. For example:
-    //
-    // LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;My continuation&quot;));
-    // LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
-    // buildTypeOf(
-    //     child, value,
-    //     [&amp;] (TypeofType type) {
-    //          do things;
-    //          m_out.jump(continuation);
-    //     });
-    // m_out.appendTo(continuation, lastNext);
-    template&lt;typename Functor&gt;
-    void buildTypeOf(Edge child, LValue value, const Functor&amp; functor)
-    {
-        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node-&gt;origin.semantic);
-        
-        // Implements the following branching structure:
-        //
-        // if (is cell) {
-        //     if (is object) {
-        //         if (is function) {
-        //             return function;
-        //         } else if (doesn't have call trap and doesn't masquerade as undefined) {
-        //             return object
-        //         } else {
-        //             return slowPath();
-        //         }
-        //     } else if (is string) {
-        //         return string
-        //     } else {
-        //         return symbol
-        //     }
-        // } else if (is number) {
-        //     return number
-        // } else if (is null) {
-        //     return object
-        // } else if (is boolean) {
-        //     return boolean
-        // } else {
-        //     return undefined
-        // }
-        
-        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf cell case&quot;));
-        LBasicBlock objectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf object case&quot;));
-        LBasicBlock functionCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf function case&quot;));
-        LBasicBlock notFunctionCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not function case&quot;));
-        LBasicBlock reallyObjectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf really object case&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf slow path&quot;));
-        LBasicBlock unreachable = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf unreachable&quot;));
-        LBasicBlock notObjectCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not object case&quot;));
-        LBasicBlock stringCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf string case&quot;));
-        LBasicBlock symbolCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf symbol case&quot;));
-        LBasicBlock notCellCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not cell case&quot;));
-        LBasicBlock numberCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf number case&quot;));
-        LBasicBlock notNumberCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not number case&quot;));
-        LBasicBlock notNullCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf not null case&quot;));
-        LBasicBlock booleanCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf boolean case&quot;));
-        LBasicBlock undefinedCase = FTL_NEW_BLOCK(m_out, (&quot;buildTypeOf undefined case&quot;));
-        
-        m_out.branch(isCell(value, provenType(child)), unsure(cellCase), unsure(notCellCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(cellCase, objectCase);
-        m_out.branch(isObject(value, provenType(child)), unsure(objectCase), unsure(notObjectCase));
-        
-        m_out.appendTo(objectCase, functionCase);
-        m_out.branch(
-            isFunction(value, provenType(child) &amp; SpecObject),
-            unsure(functionCase), unsure(notFunctionCase));
-        
-        m_out.appendTo(functionCase, notFunctionCase);
-        functor(TypeofType::Function);
-        
-        m_out.appendTo(notFunctionCase, reallyObjectCase);
-        m_out.branch(
-            isExoticForTypeof(value, provenType(child) &amp; (SpecObject - SpecFunction)),
-            rarely(slowPath), usually(reallyObjectCase));
-        
-        m_out.appendTo(reallyObjectCase, slowPath);
-        functor(TypeofType::Object);
-        
-        m_out.appendTo(slowPath, unreachable);
-        LValue result = lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                return createLazyCallGenerator(
-                    operationTypeOfObjectAsTypeofType, locations[0].directGPR(),
-                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
-            }, value);
-        Vector&lt;SwitchCase, 3&gt; cases;
-        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Undefined)), undefinedCase));
-        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Object)), reallyObjectCase));
-        cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Function)), functionCase));
-        m_out.switchInstruction(m_out.castToInt32(result), cases, unreachable, Weight());
-        
-        m_out.appendTo(unreachable, notObjectCase);
-        m_out.unreachable();
-        
-        m_out.appendTo(notObjectCase, stringCase);
-        m_out.branch(
-            isString(value, provenType(child) &amp; (SpecCell - SpecObject)),
-            unsure(stringCase), unsure(symbolCase));
-        
-        m_out.appendTo(stringCase, symbolCase);
-        functor(TypeofType::String);
-        
-        m_out.appendTo(symbolCase, notCellCase);
-        functor(TypeofType::Symbol);
-        
-        m_out.appendTo(notCellCase, numberCase);
-        m_out.branch(
-            isNumber(value, provenType(child) &amp; ~SpecCell),
-            unsure(numberCase), unsure(notNumberCase));
-        
-        m_out.appendTo(numberCase, notNumberCase);
-        functor(TypeofType::Number);
-        
-        m_out.appendTo(notNumberCase, notNullCase);
-        LValue isNull;
-        if (provenType(child) &amp; SpecOther)
-            isNull = m_out.equal(value, m_out.constInt64(ValueNull));
-        else
-            isNull = m_out.booleanFalse;
-        m_out.branch(isNull, unsure(reallyObjectCase), unsure(notNullCase));
-        
-        m_out.appendTo(notNullCase, booleanCase);
-        m_out.branch(
-            isBoolean(value, provenType(child) &amp; ~(SpecCell | SpecFullNumber)),
-            unsure(booleanCase), unsure(undefinedCase));
-        
-        m_out.appendTo(booleanCase, undefinedCase);
-        functor(TypeofType::Boolean);
-        
-        m_out.appendTo(undefinedCase, lastNext);
-        functor(TypeofType::Undefined);
-    }
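Flattened into ordinary control flow, the branching structure documented at the top of buildTypeOf looks like the classifier below. This is a simplified sketch over a toy tagged value (hypothetical types, not part of this patch); in particular, the real slow path calls operationTypeOfObjectAsTypeofType, whereas here a masquerading object is just mapped straight to Undefined.

```cpp
enum class TypeofTypeSketch { Undefined, Boolean, Number, String, Symbol, Object, Function };

// A toy tagged value, just enough to walk the same decision tree.
struct ToyValue {
    enum Kind { UndefinedK, NullK, BooleanK, NumberK, StringK, SymbolK, ObjectK, FunctionK };
    Kind kind;
    bool masqueradesAsUndefined; // the "exotic" trigger that forces the slow path
};

// Mirrors the if/else structure in the comment above buildTypeOf.
TypeofTypeSketch classify(const ToyValue& v)
{
    bool isCell = v.kind == ToyValue::StringK || v.kind == ToyValue::SymbolK
        || v.kind == ToyValue::ObjectK || v.kind == ToyValue::FunctionK;
    if (isCell) {
        if (v.kind == ToyValue::ObjectK || v.kind == ToyValue::FunctionK) {
            if (v.kind == ToyValue::FunctionK)
                return TypeofTypeSketch::Function;
            if (!v.masqueradesAsUndefined)
                return TypeofTypeSketch::Object;
            return TypeofTypeSketch::Undefined; // simplified stand-in for the slow path
        }
        if (v.kind == ToyValue::StringK)
            return TypeofTypeSketch::String;
        return TypeofTypeSketch::Symbol;
    }
    if (v.kind == ToyValue::NumberK)
        return TypeofTypeSketch::Number;
    if (v.kind == ToyValue::NullK)
        return TypeofTypeSketch::Object; // typeof null === "object"
    if (v.kind == ToyValue::BooleanK)
        return TypeofTypeSketch::Boolean;
    return TypeofTypeSketch::Undefined;
}
```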
-    
-    LValue doubleToInt32(LValue doubleValue, double low, double high, bool isSigned = true)
-    {
-        LBasicBlock greatEnough = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 greatEnough&quot;));
-        LBasicBlock withinRange = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 withinRange&quot;));
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 slowPath&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;doubleToInt32 continuation&quot;));
-        
-        Vector&lt;ValueFromBlock, 2&gt; results;
-        
-        m_out.branch(
-            m_out.doubleGreaterThanOrEqual(doubleValue, m_out.constDouble(low)),
-            unsure(greatEnough), unsure(slowPath));
-        
-        LBasicBlock lastNext = m_out.appendTo(greatEnough, withinRange);
-        m_out.branch(
-            m_out.doubleLessThanOrEqual(doubleValue, m_out.constDouble(high)),
-            unsure(withinRange), unsure(slowPath));
-        
-        m_out.appendTo(withinRange, slowPath);
-        LValue fastResult;
-        if (isSigned)
-            fastResult = m_out.doubleToInt(doubleValue);
-        else
-            fastResult = m_out.doubleToUInt(doubleValue);
-        results.append(m_out.anchor(fastResult));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(slowPath, continuation);
-        results.append(m_out.anchor(m_out.call(m_out.int32, m_out.operation(toInt32), doubleValue)));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.int32, results);
-    }
-    
-    LValue doubleToInt32(LValue doubleValue)
-    {
-        if (Output::hasSensibleDoubleToInt())
-            return sensibleDoubleToInt32(doubleValue);
-        
-        double limit = pow(2, 31) - 1;
-        return doubleToInt32(doubleValue, -limit, limit);
-    }
-    
-    LValue sensibleDoubleToInt32(LValue doubleValue)
-    {
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;sensible doubleToInt32 slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;sensible doubleToInt32 continuation&quot;));
-
-        LValue fastResultValue = m_out.doubleToInt(doubleValue);
-        ValueFromBlock fastResult = m_out.anchor(fastResultValue);
-        m_out.branch(
-            m_out.equal(fastResultValue, m_out.constInt32(0x80000000)),
-            rarely(slowPath), usually(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
-        ValueFromBlock slowResult = m_out.anchor(
-            m_out.call(m_out.int32, m_out.operation(toInt32), doubleValue));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.int32, fastResult, slowResult);
-    }
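Both conversion helpers share one idea: truncate on the fast path, and call the full JS toInt32 operation only when the result might be wrong. The "sensible" variant exploits the fact that x86's cvttsd2si reports failure by producing 0x80000000, which is exactly what the branch above tests for. A portable sketch of the same fast/slow split (hypothetical names; the real slow path is the runtime's toInt32 operation):

```cpp
#include <cmath>
#include <cstdint>

// Portable stand-in for the JS ToInt32 slow path: truncate, then reduce
// modulo 2^32 into the signed 32-bit range (ECMA-262 semantics).
int32_t jsToInt32(double d)
{
    if (std::isnan(d) || std::isinf(d))
        return 0;
    double m = std::fmod(std::trunc(d), 4294967296.0); // 2^32
    if (m < 0)
        m += 4294967296.0;
    return static_cast<int32_t>(static_cast<uint32_t>(static_cast<uint64_t>(m)));
}

// Mirrors sensibleDoubleToInt32: take the fast truncation, and fall back to
// the slow path only when it yields the 0x80000000 sentinel that cvttsd2si
// produces for NaN and out-of-range inputs.
int32_t sensibleDoubleToInt32Sketch(double d)
{
    int32_t fastResult = (d >= -2147483648.0 && d < 2147483648.0)
        ? static_cast<int32_t>(d) // in-range: truncation is exact
        : INT32_MIN;              // model the hardware sentinel
    if (fastResult != INT32_MIN)
        return fastResult;
    // Slow path: rare, and also covers the legitimate INT32_MIN input.
    return jsToInt32(d);
}
```

Note the sentinel is ambiguous: a genuine INT32_MIN input also takes the slow path, which simply recomputes the same answer.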
-
-    // This is a mechanism for creating a code generator that fills in a gap in the code using our
-    // own MacroAssembler. This is useful for slow paths that involve a lot of code and we don't want
-    // to pay the price of LLVM optimizing it. A lazy slow path will only be generated if it actually
-    // executes. On the other hand, a lazy slow path always incurs the cost of two additional jumps.
-    // Also, the lazy slow path's register allocation state is slaved to whatever LLVM did, so you
-    // have to use a ScratchRegisterAllocator to try to use some unused registers and you may have
-    // to spill to top of stack if there aren't enough registers available.
-    //
-    // Lazy slow paths involve three different stages of execution. Each stage has unique
-    // capabilities and knowledge. The stages are:
-    //
-    // 1) DFG-&gt;LLVM lowering, i.e. code that runs in this phase. Lowering is the last time you will
-    //    have access to LValues. If there is an LValue that needs to be fed as input to a lazy slow
-    //    path, then you must pass it as an argument here (as one of the varargs arguments after the
-    //    functor). But, lowering doesn't know which registers will be used for those LValues. Hence
-    //    you pass a lambda to lazySlowPath() and that lambda will run during stage (2):
-    //
-    // 2) FTLCompile.cpp's fixFunctionBasedOnStackMaps. This code is the only stage at which we know
-    //    the mapping from arguments passed to this method in (1) and the registers that LLVM
-    //    selected for those arguments. You don't actually want to generate any code here, since then
-    //    the slow path wouldn't actually be lazily generated. Instead, you want to save the
-    //    registers being used for the arguments and defer code generation to stage (3) by creating
-    //    and returning a LazySlowPath::Generator:
-    //
-    // 3) LazySlowPath's generate() method. This code runs in response to the lazy slow path
-    //    executing for the first time. It will call the generator you created in stage (2).
-    //
-    // Note that each time you invoke stage (1), stage (2) may be invoked zero, one, or many times.
-    // Stage (2) will usually be invoked once for stage (1). But, LLVM may kill the code, in which
-    // case stage (2) won't run. LLVM may duplicate the code (for example via jump threading),
-    // leading to many calls to your stage (2) lambda. Stage (3) may be called zero or once for each
-    // stage (2). It will be called zero times if the slow path never runs. This is what you hope for
-    // whenever you use the lazySlowPath() mechanism.
-    //
-    // A typical use of lazySlowPath() will look like the example below, which just creates a slow
-    // path that adds some value to the input and returns it.
-    //
-    // // Stage (1) is here. This is your last chance to figure out which LValues to use as inputs.
-    // // Notice how we pass &quot;input&quot; as an argument to lazySlowPath().
-    // LValue input = ...;
-    // int addend = ...;
-    // LValue output = lazySlowPath(
-    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-    //         // Stage (2) is here. This is your last chance to figure out which registers are used
-    //         // for which values. Location zero is always the return value. You can ignore it if
-    //         // you don't want to return anything. Location 1 is the register for the first
-    //         // argument to the lazySlowPath(), i.e. &quot;input&quot;. Note that the Location object could
-    //         // also hold an FPR, if you are passing a double.
-    //         GPRReg outputGPR = locations[0].directGPR();
-    //         GPRReg inputGPR = locations[1].directGPR();
-    //         return LazySlowPath::createGenerator(
-    //             [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
-    //                 // Stage (3) is here. This is when you generate code. You have access to the
-    //                 // registers you collected in stage (2) because this lambda closes over those
-    //                 // variables (outputGPR and inputGPR). You also have access to whatever extra
-    //                 // data you collected in stage (1), such as the addend in this case.
-    //                 jit.add32(TrustedImm32(addend), inputGPR, outputGPR);
-    //                 // You have to end by jumping to done. There is nothing to fall through to.
-    //                 // You can also jump to the exception handler (see LazySlowPath.h for more
-    //                 // info). Note that currently you cannot OSR exit.
-    //                 params.doneJumps.append(jit.jump());
-    //             });
-    //     },
-    //     input);
-    //
-    // You can basically pass as many inputs as you like, either using this varargs form, or by
-    // passing a Vector of LValues.
-    //
-    // Note that if your slow path is only doing a call, you can use the createLazyCallGenerator()
-    // helper. For example:
-    //
-    // LValue input = ...;
-    // LValue output = lazySlowPath(
-    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-    //         return createLazyCallGenerator(
-    //             operationDoThings, locations[0].directGPR(), locations[1].directGPR());
-    //     }, input);
-    //
-    // Finally, note that all of the lambdas - both the stage (2) lambda and the stage (3) lambda -
-    // run after the function that created them returns. Hence, you should not use by-reference
-    // capture (i.e. [&amp;]) in any of these lambdas.
-    template&lt;typename Functor, typename... ArgumentTypes&gt;
-    LValue lazySlowPath(const Functor&amp; functor, ArgumentTypes... arguments)
-    {
-        return lazySlowPath(functor, Vector&lt;LValue&gt;{ arguments... });
-    }
-
-    template&lt;typename Functor&gt;
-    LValue lazySlowPath(const Functor&amp; functor, const Vector&lt;LValue&gt;&amp; userArguments)
-    {
-        CodeOrigin origin = m_node-&gt;origin.semantic;
-        
-        PatchpointValue* result = m_out.patchpoint(B3::Int64);
-        for (LValue arg : userArguments)
-            result-&gt;append(ConstrainedValue(arg, B3::ValueRep::SomeRegister));
-
-        RefPtr&lt;PatchpointExceptionHandle&gt; exceptionHandle =
-            preparePatchpointForExceptions(result);
-        
-        result-&gt;clobber(RegisterSet::macroScratchRegisters());
-        State* state = &amp;m_ftlState;
-
-        result-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const StackmapGenerationParams&amp; params) {
-                Vector&lt;Location&gt; locations;
-                for (const B3::ValueRep&amp; rep : params)
-                    locations.append(Location::forValueRep(rep));
-
-                RefPtr&lt;LazySlowPath::Generator&gt; generator = functor(locations);
-                
-                CCallHelpers::PatchableJump patchableJump = jit.patchableJump();
-                CCallHelpers::Label done = jit.label();
-
-                RegisterSet usedRegisters = params.unavailableRegisters();
-
-                RefPtr&lt;ExceptionTarget&gt; exceptionTarget =
-                    exceptionHandle-&gt;scheduleExitCreation(params);
-
-                // FIXME: As part of handling exceptions, we need to create a concrete OSRExit here.
-                // Doing so should automagically register late paths that emit exit thunks.
-
-                params.addLatePath(
-                    [=] (CCallHelpers&amp; jit) {
-                        AllowMacroScratchRegisterUsage allowScratch(jit);
-                        patchableJump.m_jump.link(&amp;jit);
-                        unsigned index = state-&gt;jitCode-&gt;lazySlowPaths.size();
-                        state-&gt;jitCode-&gt;lazySlowPaths.append(nullptr);
-                        jit.pushToSaveImmediateWithoutTouchingRegisters(
-                            CCallHelpers::TrustedImm32(index));
-                        CCallHelpers::Jump generatorJump = jit.jump();
-
-                        // Note that so long as we're here, we don't really know if our late path
-                        // runs before or after any other late paths that we might depend on, like
-                        // the exception thunk.
-
-                        RefPtr&lt;JITCode&gt; jitCode = state-&gt;jitCode;
-                        VM* vm = &amp;state-&gt;graph.m_vm;
-
-                        jit.addLinkTask(
-                            [=] (LinkBuffer&amp; linkBuffer) {
-                                linkBuffer.link(
-                                    generatorJump, CodeLocationLabel(
-                                        vm-&gt;getCTIStub(
-                                            lazySlowPathGenerationThunkGenerator).code()));
-                                
-                                CodeLocationJump linkedPatchableJump = CodeLocationJump(
-                                    linkBuffer.locationOf(patchableJump));
-                                CodeLocationLabel linkedDone = linkBuffer.locationOf(done);
-
-                                CallSiteIndex callSiteIndex =
-                                    jitCode-&gt;common.addUniqueCallSiteIndex(origin);
-                                    
-                                std::unique_ptr&lt;LazySlowPath&gt; lazySlowPath =
-                                    std::make_unique&lt;LazySlowPath&gt;(
-                                        linkedPatchableJump, linkedDone,
-                                        exceptionTarget-&gt;label(linkBuffer), usedRegisters,
-                                        callSiteIndex, generator);
-                                    
-                                jitCode-&gt;lazySlowPaths[index] = WTFMove(lazySlowPath);
-                            });
-                    });
-            });
-        return result;
-    }
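Stripped of the JIT machinery, the three-stage protocol described above boils down to memoized, on-demand code generation: stage (2) hands back a generator, and stage (3) runs it at most once, the first time the slow path actually executes. A minimal sketch of that shape, with std::function standing in for patched machine code (hypothetical class, not part of this patch):

```cpp
#include <functional>
#include <utility>

// Stage (2) analogue: a factory that captures whatever the compiler knew at
// patch time. Stage (3) runs at most once, triggered by first execution.
class LazySlowPathSketch {
public:
    explicit LazySlowPathSketch(std::function<std::function<int(int)>()> generator)
        : m_generator(std::move(generator)) { }

    int run(int input)
    {
        if (!m_generated) { // first execution: generate the code now
            m_code = m_generator();
            m_generated = true;
            ++m_generationCount;
        }
        return m_code(input);
    }

    unsigned generationCount() const { return m_generationCount; }

private:
    std::function<std::function<int(int)>()> m_generator;
    std::function<int(int)> m_code;
    bool m_generated { false };
    unsigned m_generationCount { 0 };
};
```

If run() is never called, generation never happens, which is the payoff the comment above describes: a slow path that never executes costs only its patchable jump.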
-    
-    void speculate(
-        ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition)
-    {
-        appendOSRExit(kind, lowValue, highValue, failCondition, m_origin);
-    }
-    
-    void terminate(ExitKind kind)
-    {
-        speculate(kind, noValue(), nullptr, m_out.booleanTrue);
-        didAlreadyTerminate();
-    }
-    
-    void didAlreadyTerminate()
-    {
-        m_state.setIsValid(false);
-    }
-    
-    void typeCheck(
-        FormattedValue lowValue, Edge highValue, SpeculatedType typesPassedThrough,
-        LValue failCondition, ExitKind exitKind = BadType)
-    {
-        appendTypeCheck(lowValue, highValue, typesPassedThrough, failCondition, exitKind);
-    }
-    
-    void appendTypeCheck(
-        FormattedValue lowValue, Edge highValue, SpeculatedType typesPassedThrough,
-        LValue failCondition, ExitKind exitKind)
-    {
-        if (!m_interpreter.needsTypeCheck(highValue, typesPassedThrough))
-            return;
-        ASSERT(mayHaveTypeCheck(highValue.useKind()));
-        appendOSRExit(exitKind, lowValue, highValue.node(), failCondition, m_origin);
-        m_interpreter.filter(highValue, typesPassedThrough);
-    }
-    
-    LValue lowInt32(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || (edge.useKind() == Int32Use || edge.useKind() == KnownInt32Use));
-        
-        if (edge-&gt;hasConstant()) {
-            JSValue value = edge-&gt;asJSValue();
-            if (!value.isInt32()) {
-                terminate(Uncountable);
-                return m_out.int32Zero;
-            }
-            return m_out.constInt32(value.asInt32());
-        }
-        
-        LoweredNodeValue value = m_int32Values.get(edge.node());
-        if (isValid(value))
-            return value.value();
-        
-        value = m_strictInt52Values.get(edge.node());
-        if (isValid(value))
-            return strictInt52ToInt32(edge, value.value());
-        
-        value = m_int52Values.get(edge.node());
-        if (isValid(value))
-            return strictInt52ToInt32(edge, int52ToStrictInt52(value.value()));
-        
-        value = m_jsValueValues.get(edge.node());
-        if (isValid(value)) {
-            LValue boxedResult = value.value();
-            FTL_TYPE_CHECK(
-                jsValueValue(boxedResult), edge, SpecInt32, isNotInt32(boxedResult));
-            LValue result = unboxInt32(boxedResult);
-            setInt32(edge.node(), result);
-            return result;
-        }
-
-        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecInt32));
-        terminate(Uncountable);
-        return m_out.int32Zero;
-    }
-    
-    enum Int52Kind { StrictInt52, Int52 };
-    LValue lowInt52(Edge edge, Int52Kind kind)
-    {
-        DFG_ASSERT(m_graph, m_node, edge.useKind() == Int52RepUse);
-        
-        LoweredNodeValue value;
-        
-        switch (kind) {
-        case Int52:
-            value = m_int52Values.get(edge.node());
-            if (isValid(value))
-                return value.value();
-            
-            value = m_strictInt52Values.get(edge.node());
-            if (isValid(value))
-                return strictInt52ToInt52(value.value());
-            break;
-            
-        case StrictInt52:
-            value = m_strictInt52Values.get(edge.node());
-            if (isValid(value))
-                return value.value();
-            
-            value = m_int52Values.get(edge.node());
-            if (isValid(value))
-                return int52ToStrictInt52(value.value());
-            break;
-        }
-
-        DFG_ASSERT(m_graph, m_node, !provenType(edge));
-        terminate(Uncountable);
-        return m_out.int64Zero;
-    }
-    
-    LValue lowInt52(Edge edge)
-    {
-        return lowInt52(edge, Int52);
-    }
-    
-    LValue lowStrictInt52(Edge edge)
-    {
-        return lowInt52(edge, StrictInt52);
-    }
-    
-    bool betterUseStrictInt52(Node* node)
-    {
-        return !isValid(m_int52Values.get(node));
-    }
-    bool betterUseStrictInt52(Edge edge)
-    {
-        return betterUseStrictInt52(edge.node());
-    }
-    template&lt;typename T&gt;
-    Int52Kind bestInt52Kind(T node)
-    {
-        return betterUseStrictInt52(node) ? StrictInt52 : Int52;
-    }
-    Int52Kind opposite(Int52Kind kind)
-    {
-        switch (kind) {
-        case Int52:
-            return StrictInt52;
-        case StrictInt52:
-            return Int52;
-        }
-        DFG_CRASH(m_graph, m_node, &quot;Bad use kind&quot;);
-        return Int52;
-    }
-    
-    LValue lowWhicheverInt52(Edge edge, Int52Kind&amp; kind)
-    {
-        kind = bestInt52Kind(edge);
-        return lowInt52(edge, kind);
-    }
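For context on the two kinds: in JSC's Int52 representation a value lives in a 64-bit register either as the plain value (StrictInt52) or shifted left by 16 (Int52), so converting between kinds is a single shift. The encoding detail below comes from JSC's Int52 design rather than this hunk, so treat it as an illustrative sketch:

```cpp
#include <cstdint>

// StrictInt52 holds the plain 52-bit value; Int52 holds it shifted left by
// 16 so the value occupies the top of the register, making overflow checks
// on arithmetic cheap. Shift via uint64_t to keep the left shift well-defined.
inline int64_t strictInt52ToInt52(int64_t strict)
{
    return static_cast<int64_t>(static_cast<uint64_t>(strict) << 16);
}

// The inverse: an arithmetic right shift recovers the plain value,
// preserving the sign.
inline int64_t int52ToStrictInt52(int64_t shifted)
{
    return shifted >> 16;
}
```

This is why lowWhicheverInt52 exists: whichever form a node was last lowered in is the cheaper one to hand out, since the other costs a shift.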
-    
-    LValue lowCell(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        DFG_ASSERT(m_graph, m_node, mode == ManualOperandSpeculation || DFG::isCell(edge.useKind()));
-        
-        if (edge-&gt;op() == JSConstant) {
-            JSValue value = edge-&gt;asJSValue();
-            if (!value.isCell()) {
-                terminate(Uncountable);
-                return m_out.intPtrZero;
-            }
-            return m_out.constIntPtr(value.asCell());
-        }
-        
-        LoweredNodeValue value = m_jsValueValues.get(edge.node());
-        if (isValid(value)) {
-            LValue uncheckedValue = value.value();
-            FTL_TYPE_CHECK(
-                jsValueValue(uncheckedValue), edge, SpecCell, isNotCell(uncheckedValue));
-            return uncheckedValue;
-        }
-        
-        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecCell));
-        terminate(Uncountable);
-        return m_out.intPtrZero;
-    }
-    
-    LValue lowObject(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == ObjectUse);
-        
-        LValue result = lowCell(edge, mode);
-        speculateObject(edge, result);
-        return result;
-    }
-    
-    LValue lowString(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == StringUse || edge.useKind() == KnownStringUse || edge.useKind() == StringIdentUse);
-        
-        LValue result = lowCell(edge, mode);
-        speculateString(edge, result);
-        return result;
-    }
-    
-    LValue lowStringIdent(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == StringIdentUse);
-        
-        LValue string = lowString(edge, mode);
-        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
-        speculateStringIdent(edge, string, stringImpl);
-        return stringImpl;
-    }
-
-    LValue lowSymbol(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == SymbolUse);
-
-        LValue result = lowCell(edge, mode);
-        speculateSymbol(edge, result);
-        return result;
-    }
-
-    LValue lowNonNullObject(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == ObjectUse);
-        
-        LValue result = lowCell(edge, mode);
-        speculateNonNullObject(edge, result);
-        return result;
-    }
-    
-    LValue lowBoolean(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        ASSERT_UNUSED(mode, mode == ManualOperandSpeculation || edge.useKind() == BooleanUse || edge.useKind() == KnownBooleanUse);
-        
-        if (edge-&gt;hasConstant()) {
-            JSValue value = edge-&gt;asJSValue();
-            if (!value.isBoolean()) {
-                terminate(Uncountable);
-                return m_out.booleanFalse;
-            }
-            return m_out.constBool(value.asBoolean());
-        }
-        
-        LoweredNodeValue value = m_booleanValues.get(edge.node());
-        if (isValid(value))
-            return value.value();
-        
-        value = m_jsValueValues.get(edge.node());
-        if (isValid(value)) {
-            LValue unboxedResult = value.value();
-            FTL_TYPE_CHECK(
-                jsValueValue(unboxedResult), edge, SpecBoolean, isNotBoolean(unboxedResult));
-            LValue result = unboxBoolean(unboxedResult);
-            setBoolean(edge.node(), result);
-            return result;
-        }
-        
-        DFG_ASSERT(m_graph, m_node, !(provenType(edge) &amp; SpecBoolean));
-        terminate(Uncountable);
-        return m_out.booleanFalse;
-    }
-    
-    LValue lowDouble(Edge edge)
-    {
-        DFG_ASSERT(m_graph, m_node, isDouble(edge.useKind()));
-        
-        LoweredNodeValue value = m_doubleValues.get(edge.node());
-        if (isValid(value))
-            return value.value();
-        DFG_ASSERT(m_graph, m_node, !provenType(edge));
-        terminate(Uncountable);
-        return m_out.doubleZero;
-    }
-    
-    LValue lowJSValue(Edge edge, OperandSpeculationMode mode = AutomaticOperandSpeculation)
-    {
-        DFG_ASSERT(m_graph, m_node, mode == ManualOperandSpeculation || edge.useKind() == UntypedUse);
-        DFG_ASSERT(m_graph, m_node, !isDouble(edge.useKind()));
-        DFG_ASSERT(m_graph, m_node, edge.useKind() != Int52RepUse);
-        
-        if (edge-&gt;hasConstant())
-            return m_out.constInt64(JSValue::encode(edge-&gt;asJSValue()));
-
-        LoweredNodeValue value = m_jsValueValues.get(edge.node());
-        if (isValid(value))
-            return value.value();
-        
-        value = m_int32Values.get(edge.node());
-        if (isValid(value)) {
-            LValue result = boxInt32(value.value());
-            setJSValue(edge.node(), result);
-            return result;
-        }
-        
-        value = m_booleanValues.get(edge.node());
-        if (isValid(value)) {
-            LValue result = boxBoolean(value.value());
-            setJSValue(edge.node(), result);
-            return result;
-        }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Value not defined&quot;);
-        return 0;
-    }
-    
-    LValue lowStorage(Edge edge)
-    {
-        LoweredNodeValue value = m_storageValues.get(edge.node());
-        if (isValid(value))
-            return value.value();
-        
-        LValue result = lowCell(edge);
-        setStorage(edge.node(), result);
-        return result;
-    }
-    
-    LValue strictInt52ToInt32(Edge edge, LValue value)
-    {
-        LValue result = m_out.castToInt32(value);
-        FTL_TYPE_CHECK(
-            noValue(), edge, SpecInt32,
-            m_out.notEqual(m_out.signExt32To64(result), value));
-        setInt32(edge.node(), result);
-        return result;
-    }
-    
-    LValue strictInt52ToDouble(LValue value)
-    {
-        return m_out.intToDouble(value);
-    }
-    
-    LValue strictInt52ToJSValue(LValue value)
-    {
-        LBasicBlock isInt32 = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue isInt32 case&quot;));
-        LBasicBlock isDouble = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue isDouble case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;strictInt52ToJSValue continuation&quot;));
-        
-        Vector&lt;ValueFromBlock, 2&gt; results;
-            
-        LValue int32Value = m_out.castToInt32(value);
-        m_out.branch(
-            m_out.equal(m_out.signExt32To64(int32Value), value),
-            unsure(isInt32), unsure(isDouble));
-        
-        LBasicBlock lastNext = m_out.appendTo(isInt32, isDouble);
-        
-        results.append(m_out.anchor(boxInt32(int32Value)));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(isDouble, continuation);
-        
-        results.append(m_out.anchor(boxDouble(m_out.intToDouble(value))));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(m_out.int64, results);
-    }
-    
-    LValue strictInt52ToInt52(LValue value)
-    {
-        return m_out.shl(value, m_out.constInt64(JSValue::int52ShiftAmount));
-    }
-    
-    LValue int52ToStrictInt52(LValue value)
-    {
-        return m_out.aShr(value, m_out.constInt64(JSValue::int52ShiftAmount));
-    }
-    
-    LValue isInt32(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecInt32))
-            return proven;
-        return m_out.aboveOrEqual(jsValue, m_tagTypeNumber);
-    }
-    LValue isNotInt32(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecInt32))
-            return proven;
-        return m_out.below(jsValue, m_tagTypeNumber);
-    }
-    LValue unboxInt32(LValue jsValue)
-    {
-        return m_out.castToInt32(jsValue);
-    }
-    LValue boxInt32(LValue value)
-    {
-        return m_out.add(m_out.zeroExt(value, m_out.int64), m_tagTypeNumber);
-    }
-    
-    LValue isCellOrMisc(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecCell | SpecMisc))
-            return proven;
-        return m_out.testIsZero64(jsValue, m_tagTypeNumber);
-    }
-    LValue isNotCellOrMisc(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~(SpecCell | SpecMisc)))
-            return proven;
-        return m_out.testNonZero64(jsValue, m_tagTypeNumber);
-    }
-    
-    LValue unboxDouble(LValue jsValue)
-    {
-        return m_out.bitCast(m_out.add(jsValue, m_tagTypeNumber), m_out.doubleType);
-    }
-    LValue boxDouble(LValue doubleValue)
-    {
-        return m_out.sub(m_out.bitCast(doubleValue, m_out.int64), m_tagTypeNumber);
-    }
-    
-    LValue jsValueToStrictInt52(Edge edge, LValue boxedValue)
-    {
-        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing int case&quot;));
-        LBasicBlock doubleCase = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing double case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;jsValueToInt52 unboxing continuation&quot;));
-            
-        LValue isNotInt32;
-        if (!m_interpreter.needsTypeCheck(edge, SpecInt32))
-            isNotInt32 = m_out.booleanFalse;
-        else if (!m_interpreter.needsTypeCheck(edge, ~SpecInt32))
-            isNotInt32 = m_out.booleanTrue;
-        else
-            isNotInt32 = this-&gt;isNotInt32(boxedValue);
-        m_out.branch(isNotInt32, unsure(doubleCase), unsure(intCase));
-            
-        LBasicBlock lastNext = m_out.appendTo(intCase, doubleCase);
-            
-        ValueFromBlock intToInt52 = m_out.anchor(
-            m_out.signExt32To64(unboxInt32(boxedValue)));
-        m_out.jump(continuation);
-            
-        m_out.appendTo(doubleCase, continuation);
-        
-        LValue possibleResult = m_out.call(
-            m_out.int64, m_out.operation(operationConvertBoxedDoubleToInt52), boxedValue);
-        FTL_TYPE_CHECK(
-            jsValueValue(boxedValue), edge, SpecInt32 | SpecInt52AsDouble,
-            m_out.equal(possibleResult, m_out.constInt64(JSValue::notInt52)));
-            
-        ValueFromBlock doubleToInt52 = m_out.anchor(possibleResult);
-        m_out.jump(continuation);
-            
-        m_out.appendTo(continuation, lastNext);
-            
-        return m_out.phi(m_out.int64, intToInt52, doubleToInt52);
-    }
-    
-    LValue doubleToStrictInt52(Edge edge, LValue value)
-    {
-        LValue possibleResult = m_out.call(
-            m_out.int64, m_out.operation(operationConvertDoubleToInt52), value);
-        FTL_TYPE_CHECK_WITH_EXIT_KIND(Int52Overflow,
-            doubleValue(value), edge, SpecInt52AsDouble,
-            m_out.equal(possibleResult, m_out.constInt64(JSValue::notInt52)));
-        
-        return possibleResult;
-    }
-
-    LValue convertDoubleToInt32(LValue value, bool shouldCheckNegativeZero)
-    {
-        LValue integerValue = m_out.doubleToInt(value);
-        LValue integerValueConvertedToDouble = m_out.intToDouble(integerValue);
-        LValue valueNotConvertibleToInteger = m_out.doubleNotEqualOrUnordered(value, integerValueConvertedToDouble);
-        speculate(Overflow, FormattedValue(DataFormatDouble, value), m_node, valueNotConvertibleToInteger);
-
-        if (shouldCheckNegativeZero) {
-            LBasicBlock valueIsZero = FTL_NEW_BLOCK(m_out, (&quot;ConvertDoubleToInt32 on zero&quot;));
-            LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;ConvertDoubleToInt32 continuation&quot;));
-            m_out.branch(m_out.isZero32(integerValue), unsure(valueIsZero), unsure(continuation));
-
-            LBasicBlock lastNext = m_out.appendTo(valueIsZero, continuation);
-
-            LValue doubleBitcastToInt64 = m_out.bitCast(value, m_out.int64);
-            LValue signBitSet = m_out.lessThan(doubleBitcastToInt64, m_out.constInt64(0));
-
-            speculate(NegativeZero, FormattedValue(DataFormatDouble, value), m_node, signBitSet);
-            m_out.jump(continuation);
-            m_out.appendTo(continuation, lastNext);
-        }
-        return integerValue;
-    }
-
-    LValue isNumber(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecFullNumber))
-            return proven;
-        return isNotCellOrMisc(jsValue);
-    }
-    LValue isNotNumber(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecFullNumber))
-            return proven;
-        return isCellOrMisc(jsValue);
-    }
-    
-    LValue isNotCell(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecCell))
-            return proven;
-        return m_out.testNonZero64(jsValue, m_tagMask);
-    }
-    
-    LValue isCell(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecCell))
-            return proven;
-        return m_out.testIsZero64(jsValue, m_tagMask);
-    }
-    
-    LValue isNotMisc(LValue value, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecMisc))
-            return proven;
-        return m_out.above(value, m_out.constInt64(TagBitTypeOther | TagBitBool | TagBitUndefined));
-    }
-    
-    LValue isMisc(LValue value, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecMisc))
-            return proven;
-        return m_out.logicalNot(isNotMisc(value));
-    }
-    
-    LValue isNotBoolean(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecBoolean))
-            return proven;
-        return m_out.testNonZero64(
-            m_out.bitXor(jsValue, m_out.constInt64(ValueFalse)),
-            m_out.constInt64(~1));
-    }
-    LValue isBoolean(LValue jsValue, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecBoolean))
-            return proven;
-        return m_out.logicalNot(isNotBoolean(jsValue));
-    }
-    LValue unboxBoolean(LValue jsValue)
-    {
-        // We want to use a cast that guarantees that LLVM knows that even the integer
-        // value is just 0 or 1. But for now we do it the dumb way.
-        return m_out.notZero64(m_out.bitAnd(jsValue, m_out.constInt64(1)));
-    }
-    LValue boxBoolean(LValue value)
-    {
-        return m_out.select(
-            value, m_out.constInt64(ValueTrue), m_out.constInt64(ValueFalse));
-    }
-    
-    LValue isNotOther(LValue value, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, ~SpecOther))
-            return proven;
-        return m_out.notEqual(
-            m_out.bitAnd(value, m_out.constInt64(~TagBitUndefined)),
-            m_out.constInt64(ValueNull));
-    }
-    LValue isOther(LValue value, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type, SpecOther))
-            return proven;
-        return m_out.equal(
-            m_out.bitAnd(value, m_out.constInt64(~TagBitUndefined)),
-            m_out.constInt64(ValueNull));
-    }
-    
-    LValue isProvenValue(SpeculatedType provenType, SpeculatedType wantedType)
-    {
-        if (!(provenType &amp; ~wantedType))
-            return m_out.booleanTrue;
-        if (!(provenType &amp; wantedType))
-            return m_out.booleanFalse;
-        return nullptr;
-    }
-
-    void speculate(Edge edge)
-    {
-        switch (edge.useKind()) {
-        case UntypedUse:
-            break;
-        case KnownInt32Use:
-        case KnownStringUse:
-        case KnownPrimitiveUse:
-        case DoubleRepUse:
-        case Int52RepUse:
-            ASSERT(!m_interpreter.needsTypeCheck(edge));
-            break;
-        case Int32Use:
-            speculateInt32(edge);
-            break;
-        case CellUse:
-            speculateCell(edge);
-            break;
-        case CellOrOtherUse:
-            speculateCellOrOther(edge);
-            break;
-        case KnownCellUse:
-            ASSERT(!m_interpreter.needsTypeCheck(edge));
-            break;
-        case MachineIntUse:
-            speculateMachineInt(edge);
-            break;
-        case ObjectUse:
-            speculateObject(edge);
-            break;
-        case FunctionUse:
-            speculateFunction(edge);
-            break;
-        case ObjectOrOtherUse:
-            speculateObjectOrOther(edge);
-            break;
-        case FinalObjectUse:
-            speculateFinalObject(edge);
-            break;
-        case StringUse:
-            speculateString(edge);
-            break;
-        case StringIdentUse:
-            speculateStringIdent(edge);
-            break;
-        case SymbolUse:
-            speculateSymbol(edge);
-            break;
-        case StringObjectUse:
-            speculateStringObject(edge);
-            break;
-        case StringOrStringObjectUse:
-            speculateStringOrStringObject(edge);
-            break;
-        case NumberUse:
-            speculateNumber(edge);
-            break;
-        case RealNumberUse:
-            speculateRealNumber(edge);
-            break;
-        case DoubleRepRealUse:
-            speculateDoubleRepReal(edge);
-            break;
-        case DoubleRepMachineIntUse:
-            speculateDoubleRepMachineInt(edge);
-            break;
-        case BooleanUse:
-            speculateBoolean(edge);
-            break;
-        case NotStringVarUse:
-            speculateNotStringVar(edge);
-            break;
-        case NotCellUse:
-            speculateNotCell(edge);
-            break;
-        case OtherUse:
-            speculateOther(edge);
-            break;
-        case MiscUse:
-            speculateMisc(edge);
-            break;
-        default:
-            DFG_CRASH(m_graph, m_node, &quot;Unsupported speculation use kind&quot;);
-        }
-    }
-    
-    void speculate(Node*, Edge edge)
-    {
-        speculate(edge);
-    }
-    
-    void speculateInt32(Edge edge)
-    {
-        lowInt32(edge);
-    }
-    
-    void speculateCell(Edge edge)
-    {
-        lowCell(edge);
-    }
-    
-    void speculateCellOrOther(Edge edge)
-    {
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-
-        LBasicBlock isNotCell = FTL_NEW_BLOCK(m_out, (&quot;Speculate CellOrOther not cell&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate CellOrOther continuation&quot;));
-
-        m_out.branch(isCell(value, provenType(edge)), unsure(continuation), unsure(isNotCell));
-
-        LBasicBlock lastNext = m_out.appendTo(isNotCell, continuation);
-        FTL_TYPE_CHECK(jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void speculateMachineInt(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        jsValueToStrictInt52(edge, lowJSValue(edge, ManualOperandSpeculation));
-    }
-    
-    LValue isObject(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecObject))
-            return proven;
-        return m_out.aboveOrEqual(
-            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-            m_out.constInt32(ObjectType));
-    }
-
-    LValue isNotObject(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecObject))
-            return proven;
-        return m_out.below(
-            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-            m_out.constInt32(ObjectType));
-    }
-
-    LValue isNotString(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecString))
-            return proven;
-        return m_out.notEqual(
-            m_out.load32(cell, m_heaps.JSCell_structureID),
-            m_out.constInt32(vm().stringStructure-&gt;id()));
-    }
-    
-    LValue isString(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecString))
-            return proven;
-        return m_out.equal(
-            m_out.load32(cell, m_heaps.JSCell_structureID),
-            m_out.constInt32(vm().stringStructure-&gt;id()));
-    }
-
-    LValue isNotSymbol(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecSymbol))
-            return proven;
-        return m_out.notEqual(
-            m_out.load32(cell, m_heaps.JSCell_structureID),
-            m_out.constInt32(vm().symbolStructure-&gt;id()));
-    }
-
-    LValue isArrayType(LValue cell, ArrayMode arrayMode)
-    {
-        switch (arrayMode.type()) {
-        case Array::Int32:
-        case Array::Double:
-        case Array::Contiguous: {
-            LValue indexingType = m_out.load8ZeroExt32(cell, m_heaps.JSCell_indexingType);
-            
-            switch (arrayMode.arrayClass()) {
-            case Array::OriginalArray:
-                DFG_CRASH(m_graph, m_node, &quot;Unexpected original array&quot;);
-                return 0;
-                
-            case Array::Array:
-                return m_out.equal(
-                    m_out.bitAnd(indexingType, m_out.constInt32(IsArray | IndexingShapeMask)),
-                    m_out.constInt32(IsArray | arrayMode.shapeMask()));
-                
-            case Array::NonArray:
-            case Array::OriginalNonArray:
-                return m_out.equal(
-                    m_out.bitAnd(indexingType, m_out.constInt32(IsArray | IndexingShapeMask)),
-                    m_out.constInt32(arrayMode.shapeMask()));
-                
-            case Array::PossiblyArray:
-                return m_out.equal(
-                    m_out.bitAnd(indexingType, m_out.constInt32(IndexingShapeMask)),
-                    m_out.constInt32(arrayMode.shapeMask()));
-            }
-            
-            DFG_CRASH(m_graph, m_node, &quot;Corrupt array class&quot;);
-        }
-            
-        case Array::DirectArguments:
-            return m_out.equal(
-                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-                m_out.constInt32(DirectArgumentsType));
-            
-        case Array::ScopedArguments:
-            return m_out.equal(
-                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-                m_out.constInt32(ScopedArgumentsType));
-            
-        default:
-            return m_out.equal(
-                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-                m_out.constInt32(typeForTypedArrayType(arrayMode.typedArrayType())));
-        }
-    }
-    
-    LValue isFunction(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, SpecFunction))
-            return proven;
-        return isType(cell, JSFunctionType);
-    }
-    LValue isNotFunction(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (LValue proven = isProvenValue(type &amp; SpecCell, ~SpecFunction))
-            return proven;
-        return isNotType(cell, JSFunctionType);
-    }
-            
-    LValue isExoticForTypeof(LValue cell, SpeculatedType type = SpecFullTop)
-    {
-        if (!(type &amp; SpecObjectOther))
-            return m_out.booleanFalse;
-        return m_out.testNonZero32(
-            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
-            m_out.constInt32(MasqueradesAsUndefined | TypeOfShouldCallGetCallData));
-    }
-    
-    LValue isType(LValue cell, JSType type)
-    {
-        return m_out.equal(
-            m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoType),
-            m_out.constInt32(type));
-    }
-    
-    LValue isNotType(LValue cell, JSType type)
-    {
-        return m_out.logicalNot(isType(cell, type));
-    }
-    
-    void speculateObject(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecObject, isNotObject(cell));
-    }
-    
-    void speculateObject(Edge edge)
-    {
-        speculateObject(edge, lowCell(edge));
-    }
-    
-    void speculateFunction(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecFunction, isNotFunction(cell));
-    }
-    
-    void speculateFunction(Edge edge)
-    {
-        speculateFunction(edge, lowCell(edge));
-    }
-    
-    void speculateObjectOrOther(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        
-        LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther cell case&quot;));
-        LBasicBlock primitiveCase = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther primitive case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;speculateObjectOrOther continuation&quot;));
-        
-        m_out.branch(isNotCell(value, provenType(edge)), unsure(primitiveCase), unsure(cellCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(cellCase, primitiveCase);
-        
-        FTL_TYPE_CHECK(
-            jsValueValue(value), edge, (~SpecCell) | SpecObject, isNotObject(value));
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(primitiveCase, continuation);
-        
-        FTL_TYPE_CHECK(
-            jsValueValue(value), edge, SpecCell | SpecOther, isNotOther(value));
-        
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void speculateFinalObject(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(
-            jsValueValue(cell), edge, SpecFinalObject, isNotType(cell, FinalObjectType));
-    }
-    
-    void speculateFinalObject(Edge edge)
-    {
-        speculateFinalObject(edge, lowCell(edge));
-    }
-    
-    void speculateString(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecString | ~SpecCell, isNotString(cell));
-    }
-    
-    void speculateString(Edge edge)
-    {
-        speculateString(edge, lowCell(edge));
-    }
-    
-    void speculateStringIdent(Edge edge, LValue string, LValue stringImpl)
-    {
-        if (!m_interpreter.needsTypeCheck(edge, SpecStringIdent | ~SpecString))
-            return;
-        
-        speculate(BadType, jsValueValue(string), edge.node(), m_out.isNull(stringImpl));
-        speculate(
-            BadType, jsValueValue(string), edge.node(),
-            m_out.testIsZero32(
-                m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags),
-                m_out.constInt32(StringImpl::flagIsAtomic())));
-        m_interpreter.filter(edge, SpecStringIdent | ~SpecString);
-    }
-    
-    void speculateStringIdent(Edge edge)
-    {
-        lowStringIdent(edge);
-    }
-    
-    void speculateStringObject(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge, SpecStringObject))
-            return;
-        
-        speculateStringObjectForCell(edge, lowCell(edge));
-        m_interpreter.filter(edge, SpecStringObject);
-    }
-    
-    void speculateStringOrStringObject(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge, SpecString | SpecStringObject))
-            return;
-        
-        LBasicBlock notString = FTL_NEW_BLOCK(m_out, (&quot;Speculate StringOrStringObject not string case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate StringOrStringObject continuation&quot;));
-        
-        LValue structureID = m_out.load32(lowCell(edge), m_heaps.JSCell_structureID);
-        m_out.branch(
-            m_out.equal(structureID, m_out.constInt32(vm().stringStructure-&gt;id())),
-            unsure(continuation), unsure(notString));
-        
-        LBasicBlock lastNext = m_out.appendTo(notString, continuation);
-        speculateStringObjectForStructureID(edge, structureID);
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-        
-        m_interpreter.filter(edge, SpecString | SpecStringObject);
-    }
-    
-    void speculateStringObjectForCell(Edge edge, LValue cell)
-    {
-        speculateStringObjectForStructureID(edge, m_out.load32(cell, m_heaps.JSCell_structureID));
-    }
-    
-    void speculateStringObjectForStructureID(Edge edge, LValue structureID)
-    {
-        Structure* stringObjectStructure =
-            m_graph.globalObjectFor(m_node-&gt;origin.semantic)-&gt;stringObjectStructure();
-        
-        if (abstractStructure(edge).isSubsetOf(StructureSet(stringObjectStructure)))
-            return;
-        
-        speculate(
-            NotStringObject, noValue(), 0,
-            m_out.notEqual(structureID, weakStructureID(stringObjectStructure)));
-    }
-
-    void speculateSymbol(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecSymbol | ~SpecCell, isNotSymbol(cell));
-    }
-
-    void speculateSymbol(Edge edge)
-    {
-        speculateSymbol(edge, lowCell(edge));
-    }
-
-    void speculateNonNullObject(Edge edge, LValue cell)
-    {
-        FTL_TYPE_CHECK(jsValueValue(cell), edge, SpecObject, isNotObject(cell));
-        if (masqueradesAsUndefinedWatchpointIsStillValid())
-            return;
-        
-        speculate(
-            BadType, jsValueValue(cell), edge.node(),
-            m_out.testNonZero32(
-                m_out.load8ZeroExt32(cell, m_heaps.JSCell_typeInfoFlags),
-                m_out.constInt32(MasqueradesAsUndefined)));
-    }
-    
-    void speculateNumber(Edge edge)
-    {
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        FTL_TYPE_CHECK(jsValueValue(value), edge, SpecBytecodeNumber, isNotNumber(value));
-    }
-    
-    void speculateRealNumber(Edge edge)
-    {
-        // Do an early return here because lowDouble() can create a lot of control flow.
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        LValue doubleValue = unboxDouble(value);
-        
-        LBasicBlock intCase = FTL_NEW_BLOCK(m_out, (&quot;speculateRealNumber int case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;speculateRealNumber continuation&quot;));
-        
-        m_out.branch(
-            m_out.doubleEqual(doubleValue, doubleValue),
-            usually(continuation), rarely(intCase));
-        
-        LBasicBlock lastNext = m_out.appendTo(intCase, continuation);
-        
-        typeCheck(
-            jsValueValue(value), m_node-&gt;child1(), SpecBytecodeRealNumber,
-            isNotInt32(value, provenType(m_node-&gt;child1()) &amp; ~SpecFullDouble));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void speculateDoubleRepReal(Edge edge)
-    {
-        // Do an early return here because lowDouble() can create a lot of control flow.
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowDouble(edge);
-        FTL_TYPE_CHECK(
-            doubleValue(value), edge, SpecDoubleReal,
-            m_out.doubleNotEqualOrUnordered(value, value));
-    }
-    
-    void speculateDoubleRepMachineInt(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        doubleToStrictInt52(edge, lowDouble(edge));
-    }
-    
-    void speculateBoolean(Edge edge)
-    {
-        lowBoolean(edge);
-    }
-    
-    void speculateNotStringVar(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge, ~SpecStringVar))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        
-        LBasicBlock isCellCase = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar is cell case&quot;));
-        LBasicBlock isStringCase = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar is string case&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Speculate NotStringVar continuation&quot;));
-        
-        m_out.branch(isCell(value, provenType(edge)), unsure(isCellCase), unsure(continuation));
-        
-        LBasicBlock lastNext = m_out.appendTo(isCellCase, isStringCase);
-        m_out.branch(isString(value, provenType(edge)), unsure(isStringCase), unsure(continuation));
-        
-        m_out.appendTo(isStringCase, continuation);
-        speculateStringIdent(edge, value, m_out.loadPtr(value, m_heaps.JSString_value));
-        m_out.jump(continuation);
-        
-        m_out.appendTo(continuation, lastNext);
-    }
-    
-    void speculateNotCell(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        typeCheck(jsValueValue(value), edge, ~SpecCell, isCell(value));
-    }
-    
-    void speculateOther(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        typeCheck(jsValueValue(value), edge, SpecOther, isNotOther(value));
-    }
-    
-    void speculateMisc(Edge edge)
-    {
-        if (!m_interpreter.needsTypeCheck(edge))
-            return;
-        
-        LValue value = lowJSValue(edge, ManualOperandSpeculation);
-        typeCheck(jsValueValue(value), edge, SpecMisc, isNotMisc(value));
-    }
-    
-    bool masqueradesAsUndefinedWatchpointIsStillValid()
-    {
-        return m_graph.masqueradesAsUndefinedWatchpointIsStillValid(m_node-&gt;origin.semantic);
-    }
-    
-    LValue loadCellState(LValue base)
-    {
-        return m_out.load8ZeroExt32(base, m_heaps.JSCell_cellState);
-    }
-
-    void emitStoreBarrier(LValue base)
-    {
-        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;Store barrier slow path&quot;));
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Store barrier continuation&quot;));
-
-        m_out.branch(
-            m_out.notZero32(loadCellState(base)), usually(continuation), rarely(slowPath));
-
-        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
-
-        // We emit the store barrier slow path lazily. In a lot of cases, this will never fire. And
-        // when it does fire, it makes sense for us to generate this code using our JIT rather than
-        // wasting LLVM's time optimizing it.
-        lazySlowPath(
-            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
-                GPRReg baseGPR = locations[1].directGPR();
-
-                return LazySlowPath::createGenerator(
-                    [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
-                        RegisterSet usedRegisters = params.lazySlowPath-&gt;usedRegisters();
-                        ScratchRegisterAllocator scratchRegisterAllocator(usedRegisters);
-                        scratchRegisterAllocator.lock(baseGPR);
-
-                        GPRReg scratch1 = scratchRegisterAllocator.allocateScratchGPR();
-                        GPRReg scratch2 = scratchRegisterAllocator.allocateScratchGPR();
-
-                        ScratchRegisterAllocator::PreservedState preservedState =
-                            scratchRegisterAllocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
-
-                        // We've already saved these, so when we make a slow path call, we don't have
-                        // to save them again.
-                        usedRegisters.exclude(RegisterSet(scratch1, scratch2));
-
-                        WriteBarrierBuffer&amp; writeBarrierBuffer = jit.vm()-&gt;heap.writeBarrierBuffer();
-                        jit.load32(writeBarrierBuffer.currentIndexAddress(), scratch2);
-                        CCallHelpers::Jump needToFlush = jit.branch32(
-                            CCallHelpers::AboveOrEqual, scratch2,
-                            CCallHelpers::TrustedImm32(writeBarrierBuffer.capacity()));
-
-                        jit.add32(CCallHelpers::TrustedImm32(1), scratch2);
-                        jit.store32(scratch2, writeBarrierBuffer.currentIndexAddress());
-
-                        jit.move(CCallHelpers::TrustedImmPtr(writeBarrierBuffer.buffer()), scratch1);
-                        jit.storePtr(
-                            baseGPR,
-                            CCallHelpers::BaseIndex(
-                                scratch1, scratch2, CCallHelpers::ScalePtr,
-                                static_cast&lt;int32_t&gt;(-sizeof(void*))));
-
-                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, preservedState);
-
-                        params.doneJumps.append(jit.jump());
-
-                        needToFlush.link(&amp;jit);
-                        callOperation(
-                            usedRegisters, jit, params.lazySlowPath-&gt;callSiteIndex(),
-                            params.exceptionJumps, operationFlushWriteBarrierBuffer, InvalidGPRReg,
-                            baseGPR);
-                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, preservedState);
-                        params.doneJumps.append(jit.jump());
-                    });
-            },
-            base);
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-    }
-
-    template&lt;typename... Args&gt;
-    LValue vmCall(LType type, LValue function, Args... args)
-    {
-        callPreflight();
-        LValue result = m_out.call(type, function, args...);
-        callCheck();
-        return result;
-    }
-    
-    void callPreflight(CodeOrigin codeOrigin)
-    {
-        CallSiteIndex callSiteIndex = m_ftlState.jitCode-&gt;common.addCodeOrigin(codeOrigin);
-        m_out.store32(
-            m_out.constInt32(callSiteIndex.bits()),
-            tagFor(JSStack::ArgumentCount));
-    }
-
-    void callPreflight()
-    {
-        callPreflight(codeOriginDescriptionOfCallSite());
-    }
-
-    CodeOrigin codeOriginDescriptionOfCallSite() const
-    {
-        CodeOrigin codeOrigin = m_node-&gt;origin.semantic;
-        if (m_node-&gt;op() == TailCallInlinedCaller
-            || m_node-&gt;op() == TailCallVarargsInlinedCaller
-            || m_node-&gt;op() == TailCallForwardVarargsInlinedCaller) {
-            // This case arises when you have a situation like this:
-            // foo makes a call to bar, bar is inlined in foo. bar makes a call
-            // to baz and baz is inlined in bar. And then baz makes a tail-call to jaz,
-            // and jaz is inlined in baz. We want the callframe for jaz to appear to 
-            // have caller be bar.
-            codeOrigin = *codeOrigin.inlineCallFrame-&gt;getCallerSkippingTailCalls();
-        }
-
-        return codeOrigin;
-    }
-    
-    void callCheck()
-    {
-        if (Options::useExceptionFuzz())
-            m_out.call(m_out.voidType, m_out.operation(operationExceptionFuzz), m_callFrame);
-        
-        LValue exception = m_out.load64(m_out.absolute(vm().addressOfException()));
-        LValue hadException = m_out.notZero64(exception);
-
-        CodeOrigin opCatchOrigin;
-        HandlerInfo* exceptionHandler;
-        if (m_graph.willCatchExceptionInMachineFrame(m_origin.forExit, opCatchOrigin, exceptionHandler)) {
-            bool exitOK = true;
-            bool isExceptionHandler = true;
-            appendOSRExit(
-                ExceptionCheck, noValue(), nullptr, hadException,
-                m_origin.withForExitAndExitOK(opCatchOrigin, exitOK), isExceptionHandler);
-            return;
-        }
-
-        LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Exception check continuation&quot;));
-
-        m_out.branch(
-            hadException, rarely(m_handleExceptions), usually(continuation));
-
-        m_out.appendTo(continuation);
-    }
-
-    RefPtr&lt;PatchpointExceptionHandle&gt; preparePatchpointForExceptions(PatchpointValue* value)
-    {
-        CodeOrigin opCatchOrigin;
-        HandlerInfo* exceptionHandler;
-        bool willCatchException = m_graph.willCatchExceptionInMachineFrame(m_origin.forExit, opCatchOrigin, exceptionHandler);
-        if (!willCatchException)
-            return PatchpointExceptionHandle::defaultHandle(m_ftlState);
-
-        if (verboseCompilationEnabled()) {
-            dataLog(&quot;    Patchpoint exception OSR exit #&quot;, m_ftlState.jitCode-&gt;osrExitDescriptors.size(), &quot; with availability: &quot;, availabilityMap(), &quot;\n&quot;);
-            if (!m_availableRecoveries.isEmpty())
-                dataLog(&quot;        Available recoveries: &quot;, listDump(m_availableRecoveries), &quot;\n&quot;);
-        }
-
-        bool exitOK = true;
-        NodeOrigin origin = m_origin.withForExitAndExitOK(opCatchOrigin, exitOK);
-
-        OSRExitDescriptor* exitDescriptor = appendOSRExitDescriptor(noValue(), nullptr);
-
-        // Compute the offset into the StackmapGenerationParams where we will find the exit arguments
-        // we are about to append. We need to account for both the children we've already added, and
-        // for the possibility of a result value if the patchpoint is not void.
-        unsigned offset = value-&gt;numChildren();
-        if (value-&gt;type() != Void)
-            offset++;
-
-        // Use LateColdAny to ensure that the stackmap arguments interfere with the patchpoint's
-        // result and with any late-clobbered registers.
-        value-&gt;appendVectorWithRep(
-            buildExitArguments(exitDescriptor, opCatchOrigin, noValue()),
-            ValueRep::LateColdAny);
-
-        return PatchpointExceptionHandle::create(
-            m_ftlState, exitDescriptor, origin, offset, *exceptionHandler);
-    }
-
-    LBasicBlock lowBlock(DFG::BasicBlock* block)
-    {
-        return m_blocks.get(block);
-    }
-
-    OSRExitDescriptor* appendOSRExitDescriptor(FormattedValue lowValue, Node* highValue)
-    {
-        return &amp;m_ftlState.jitCode-&gt;osrExitDescriptors.alloc(
-            lowValue.format(), m_graph.methodOfGettingAValueProfileFor(highValue),
-            availabilityMap().m_locals.numberOfArguments(),
-            availabilityMap().m_locals.numberOfLocals());
-    }
-    
-    void appendOSRExit(
-        ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition, 
-        NodeOrigin origin, bool isExceptionHandler = false)
-    {
-        if (verboseCompilationEnabled()) {
-            dataLog(&quot;    OSR exit #&quot;, m_ftlState.jitCode-&gt;osrExitDescriptors.size(), &quot; with availability: &quot;, availabilityMap(), &quot;\n&quot;);
-            if (!m_availableRecoveries.isEmpty())
-                dataLog(&quot;        Available recoveries: &quot;, listDump(m_availableRecoveries), &quot;\n&quot;);
-        }
-
-        DFG_ASSERT(m_graph, m_node, origin.exitOK);
-        
-        if (doOSRExitFuzzing() &amp;&amp; !isExceptionHandler) {
-            LValue numberOfFuzzChecks = m_out.add(
-                m_out.load32(m_out.absolute(&amp;g_numberOfOSRExitFuzzChecks)),
-                m_out.int32One);
-            
-            m_out.store32(numberOfFuzzChecks, m_out.absolute(&amp;g_numberOfOSRExitFuzzChecks));
-            
-            if (unsigned atOrAfter = Options::fireOSRExitFuzzAtOrAfter()) {
-                failCondition = m_out.bitOr(
-                    failCondition,
-                    m_out.aboveOrEqual(numberOfFuzzChecks, m_out.constInt32(atOrAfter)));
-            }
-            if (unsigned at = Options::fireOSRExitFuzzAt()) {
-                failCondition = m_out.bitOr(
-                    failCondition,
-                    m_out.equal(numberOfFuzzChecks, m_out.constInt32(at)));
-            }
-        }
-
-        if (failCondition == m_out.booleanFalse)
-            return;
-
-        blessSpeculation(
-            m_out.speculate(failCondition), kind, lowValue, highValue, origin);
-    }
-
-    void blessSpeculation(CheckValue* value, ExitKind kind, FormattedValue lowValue, Node* highValue, NodeOrigin origin)
-    {
-        OSRExitDescriptor* exitDescriptor = appendOSRExitDescriptor(lowValue, highValue);
-        
-        value-&gt;appendColdAnys(buildExitArguments(exitDescriptor, origin.forExit, lowValue));
-
-        State* state = &amp;m_ftlState;
-        value-&gt;setGenerator(
-            [=] (CCallHelpers&amp; jit, const B3::StackmapGenerationParams&amp; params) {
-                exitDescriptor-&gt;emitOSRExit(
-                    *state, kind, origin, jit, params, 0);
-            });
-    }
-
-    StackmapArgumentList buildExitArguments(
-        OSRExitDescriptor* exitDescriptor, CodeOrigin exitOrigin, FormattedValue lowValue,
-        unsigned offsetOfExitArgumentsInStackmapLocations = 0)
-    {
-        StackmapArgumentList result;
-        buildExitArguments(
-            exitDescriptor, exitOrigin, result, lowValue, offsetOfExitArgumentsInStackmapLocations);
-        return result;
-    }
-    
-    void buildExitArguments(
-        OSRExitDescriptor* exitDescriptor, CodeOrigin exitOrigin, StackmapArgumentList&amp; arguments, FormattedValue lowValue,
-        unsigned offsetOfExitArgumentsInStackmapLocations = 0)
-    {
-        if (!!lowValue)
-            arguments.append(lowValue.value());
-        
-        AvailabilityMap availabilityMap = this-&gt;availabilityMap();
-        availabilityMap.pruneByLiveness(m_graph, exitOrigin);
-        
-        HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt; map;
-        availabilityMap.forEachAvailability(
-            [&amp;] (Availability availability) {
-                if (!availability.shouldUseNode())
-                    return;
-                
-                Node* node = availability.node();
-                if (!node-&gt;isPhantomAllocation())
-                    return;
-                
-                auto result = map.add(node, nullptr);
-                if (result.isNewEntry) {
-                    result.iterator-&gt;value =
-                        exitDescriptor-&gt;m_materializations.add(node-&gt;op(), node-&gt;origin.semantic);
-                }
-            });
-        
-        for (unsigned i = 0; i &lt; exitDescriptor-&gt;m_values.size(); ++i) {
-            int operand = exitDescriptor-&gt;m_values.operandForIndex(i);
-            
-            Availability availability = availabilityMap.m_locals[i];
-            
-            if (Options::validateFTLOSRExitLiveness()) {
-                DFG_ASSERT(
-                    m_graph, m_node,
-                    (!(availability.isDead() &amp;&amp; m_graph.isLiveInBytecode(VirtualRegister(operand), exitOrigin))) || m_graph.m_plan.mode == FTLForOSREntryMode);
-            }
-            ExitValue exitValue = exitValueForAvailability(arguments, map, availability);
-            if (exitValue.hasIndexInStackmapLocations())
-                exitValue.adjustStackmapLocationsIndexByOffset(offsetOfExitArgumentsInStackmapLocations);
-            exitDescriptor-&gt;m_values[i] = exitValue;
-        }
-        
-        for (auto heapPair : availabilityMap.m_heap) {
-            Node* node = heapPair.key.base();
-            ExitTimeObjectMaterialization* materialization = map.get(node);
-            ExitValue exitValue = exitValueForAvailability(arguments, map, heapPair.value);
-            if (exitValue.hasIndexInStackmapLocations())
-                exitValue.adjustStackmapLocationsIndexByOffset(offsetOfExitArgumentsInStackmapLocations);
-            materialization-&gt;add(
-                heapPair.key.descriptor(),
-                exitValue);
-        }
-        
-        if (verboseCompilationEnabled()) {
-            dataLog(&quot;        Exit values: &quot;, exitDescriptor-&gt;m_values, &quot;\n&quot;);
-            if (!exitDescriptor-&gt;m_materializations.isEmpty()) {
-                dataLog(&quot;        Materializations: \n&quot;);
-                for (ExitTimeObjectMaterialization* materialization : exitDescriptor-&gt;m_materializations)
-                    dataLog(&quot;            &quot;, pointerDump(materialization), &quot;\n&quot;);
-            }
-        }
-    }
-
-    ExitValue exitValueForAvailability(
-        StackmapArgumentList&amp; arguments, const HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt;&amp; map,
-        Availability availability)
-    {
-        FlushedAt flush = availability.flushedAt();
-        switch (flush.format()) {
-        case DeadFlush:
-        case ConflictingFlush:
-            if (availability.hasNode())
-                return exitValueForNode(arguments, map, availability.node());
-            
-            // This means that the value is dead. It could be dead in bytecode or it could have
-            // been killed by our DCE, which can sometimes kill things even if they were live in
-            // bytecode.
-            return ExitValue::dead();
-
-        case FlushedJSValue:
-        case FlushedCell:
-        case FlushedBoolean:
-            return ExitValue::inJSStack(flush.virtualRegister());
-                
-        case FlushedInt32:
-            return ExitValue::inJSStackAsInt32(flush.virtualRegister());
-                
-        case FlushedInt52:
-            return ExitValue::inJSStackAsInt52(flush.virtualRegister());
-                
-        case FlushedDouble:
-            return ExitValue::inJSStackAsDouble(flush.virtualRegister());
-        }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Invalid flush format&quot;);
-        return ExitValue::dead();
-    }
-    
-    ExitValue exitValueForNode(
-        StackmapArgumentList&amp; arguments, const HashMap&lt;Node*, ExitTimeObjectMaterialization*&gt;&amp; map,
-        Node* node)
-    {
-        // NOTE: In FTL-&gt;B3, we cannot generate code here, because m_output is positioned after the
-        // stackmap value. Like all values, the stackmap value cannot use a child that is defined after
-        // it.
-        
-        ASSERT(node-&gt;shouldGenerate());
-        ASSERT(node-&gt;hasResult());
-
-        if (node) {
-            switch (node-&gt;op()) {
-            case BottomValue:
-                // This might arise in object materializations. I actually doubt that it would,
-                // but it seems worthwhile to be conservative.
-                return ExitValue::dead();
-                
-            case JSConstant:
-            case Int52Constant:
-            case DoubleConstant:
-                return ExitValue::constant(node-&gt;asJSValue());
-                
-            default:
-                if (node-&gt;isPhantomAllocation())
-                    return ExitValue::materializeNewObject(map.get(node));
-                break;
-            }
-        }
-        
-        for (unsigned i = 0; i &lt; m_availableRecoveries.size(); ++i) {
-            AvailableRecovery recovery = m_availableRecoveries[i];
-            if (recovery.node() != node)
-                continue;
-            ExitValue result = ExitValue::recovery(
-                recovery.opcode(), arguments.size(), arguments.size() + 1,
-                recovery.format());
-            arguments.append(recovery.left());
-            arguments.append(recovery.right());
-            return result;
-        }
-        
-        LoweredNodeValue value = m_int32Values.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatInt32, value.value());
-        
-        value = m_int52Values.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatInt52, value.value());
-        
-        value = m_strictInt52Values.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatStrictInt52, value.value());
-        
-        value = m_booleanValues.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatBoolean, value.value());
-        
-        value = m_jsValueValues.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatJS, value.value());
-        
-        value = m_doubleValues.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatDouble, value.value());
-
-        DFG_CRASH(m_graph, m_node, toCString(&quot;Cannot find value for node: &quot;, node).data());
-        return ExitValue::dead();
-    }
-
-    ExitValue exitArgument(StackmapArgumentList&amp; arguments, DataFormat format, LValue value)
-    {
-        ExitValue result = ExitValue::exitArgument(ExitArgument(format, arguments.size()));
-        arguments.append(value);
-        return result;
-    }
-
-    ExitValue exitValueForTailCall(StackmapArgumentList&amp; arguments, Node* node)
-    {
-        ASSERT(node-&gt;shouldGenerate());
-        ASSERT(node-&gt;hasResult());
-
-        switch (node-&gt;op()) {
-        case JSConstant:
-        case Int52Constant:
-        case DoubleConstant:
-            return ExitValue::constant(node-&gt;asJSValue());
-
-        default:
-            break;
-        }
-
-        LoweredNodeValue value = m_jsValueValues.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatJS, value.value());
-
-        value = m_int32Values.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatJS, boxInt32(value.value()));
-
-        value = m_booleanValues.get(node);
-        if (isValid(value))
-            return exitArgument(arguments, DataFormatJS, boxBoolean(value.value()));
-
-        // Doubles and Int52 have been converted by ValueRep()
-        DFG_CRASH(m_graph, m_node, toCString(&quot;Cannot find value for node: &quot;, node).data());
-    }
-    
-    bool doesKill(Edge edge)
-    {
-        if (edge.doesNotKill())
-            return false;
-        
-        if (edge-&gt;hasConstant())
-            return false;
-        
-        return true;
-    }
-
-    void addAvailableRecovery(
-        Node* node, RecoveryOpcode opcode, LValue left, LValue right, DataFormat format)
-    {
-        m_availableRecoveries.append(AvailableRecovery(node, opcode, left, right, format));
-    }
-    
-    void addAvailableRecovery(
-        Edge edge, RecoveryOpcode opcode, LValue left, LValue right, DataFormat format)
-    {
-        addAvailableRecovery(edge.node(), opcode, left, right, format);
-    }
-    
-    void setInt32(Node* node, LValue value)
-    {
-        m_int32Values.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setInt52(Node* node, LValue value)
-    {
-        m_int52Values.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setStrictInt52(Node* node, LValue value)
-    {
-        m_strictInt52Values.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setInt52(Node* node, LValue value, Int52Kind kind)
-    {
-        switch (kind) {
-        case Int52:
-            setInt52(node, value);
-            return;
-            
-        case StrictInt52:
-            setStrictInt52(node, value);
-            return;
-        }
-        
-        DFG_CRASH(m_graph, m_node, &quot;Corrupt int52 kind&quot;);
-    }
-    void setJSValue(Node* node, LValue value)
-    {
-        m_jsValueValues.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setBoolean(Node* node, LValue value)
-    {
-        m_booleanValues.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setStorage(Node* node, LValue value)
-    {
-        m_storageValues.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-    void setDouble(Node* node, LValue value)
-    {
-        m_doubleValues.set(node, LoweredNodeValue(value, m_highBlock));
-    }
-
-    void setInt32(LValue value)
-    {
-        setInt32(m_node, value);
-    }
-    void setInt52(LValue value)
-    {
-        setInt52(m_node, value);
-    }
-    void setStrictInt52(LValue value)
-    {
-        setStrictInt52(m_node, value);
-    }
-    void setInt52(LValue value, Int52Kind kind)
-    {
-        setInt52(m_node, value, kind);
-    }
-    void setJSValue(LValue value)
-    {
-        setJSValue(m_node, value);
-    }
-    void setBoolean(LValue value)
-    {
-        setBoolean(m_node, value);
-    }
-    void setStorage(LValue value)
-    {
-        setStorage(m_node, value);
-    }
-    void setDouble(LValue value)
-    {
-        setDouble(m_node, value);
-    }
-    
-    bool isValid(const LoweredNodeValue&amp; value)
-    {
-        if (!value)
-            return false;
-        if (!m_graph.m_dominators-&gt;dominates(value.block(), m_highBlock))
-            return false;
-        return true;
-    }
-    
-    void addWeakReference(JSCell* target)
-    {
-        m_graph.m_plan.weakReferences.addLazily(target);
-    }
-
-    LValue loadStructure(LValue value)
-    {
-        LValue tableIndex = m_out.load32(value, m_heaps.JSCell_structureID);
-        LValue tableBase = m_out.loadPtr(
-            m_out.absolute(vm().heap.structureIDTable().base()));
-        TypedPointer address = m_out.baseIndex(
-            m_heaps.structureTable, tableBase, m_out.zeroExtPtr(tableIndex));
-        return m_out.loadPtr(address);
-    }
-
-    LValue weakPointer(JSCell* pointer)
-    {
-        addWeakReference(pointer);
-        return m_out.constIntPtr(pointer);
-    }
-
-    LValue weakStructureID(Structure* structure)
-    {
-        addWeakReference(structure);
-        return m_out.constInt32(structure-&gt;id());
-    }
-    
-    LValue weakStructure(Structure* structure)
-    {
-        return weakPointer(structure);
-    }
-    
-    TypedPointer addressFor(LValue base, int operand, ptrdiff_t offset = 0)
-    {
-        return m_out.address(base, m_heaps.variables[operand], offset);
-    }
-    TypedPointer payloadFor(LValue base, int operand)
-    {
-        return addressFor(base, operand, PayloadOffset);
-    }
-    TypedPointer tagFor(LValue base, int operand)
-    {
-        return addressFor(base, operand, TagOffset);
-    }
-    TypedPointer addressFor(int operand, ptrdiff_t offset = 0)
-    {
-        return addressFor(VirtualRegister(operand), offset);
-    }
-    TypedPointer addressFor(VirtualRegister operand, ptrdiff_t offset = 0)
-    {
-        if (operand.isLocal())
-            return addressFor(m_captured, operand.offset(), offset);
-        return addressFor(m_callFrame, operand.offset(), offset);
-    }
-    TypedPointer payloadFor(int operand)
-    {
-        return payloadFor(VirtualRegister(operand));
-    }
-    TypedPointer payloadFor(VirtualRegister operand)
-    {
-        return addressFor(operand, PayloadOffset);
-    }
-    TypedPointer tagFor(int operand)
-    {
-        return tagFor(VirtualRegister(operand));
-    }
-    TypedPointer tagFor(VirtualRegister operand)
-    {
-        return addressFor(operand, TagOffset);
-    }
-    
-    AbstractValue abstractValue(Node* node)
-    {
-        return m_state.forNode(node);
-    }
-    AbstractValue abstractValue(Edge edge)
-    {
-        return abstractValue(edge.node());
-    }
-    
-    SpeculatedType provenType(Node* node)
-    {
-        return abstractValue(node).m_type;
-    }
-    SpeculatedType provenType(Edge edge)
-    {
-        return provenType(edge.node());
-    }
-    
-    JSValue provenValue(Node* node)
-    {
-        return abstractValue(node).m_value;
-    }
-    JSValue provenValue(Edge edge)
-    {
-        return provenValue(edge.node());
-    }
-    
-    StructureAbstractValue abstractStructure(Node* node)
-    {
-        return abstractValue(node).m_structure;
-    }
-    StructureAbstractValue abstractStructure(Edge edge)
-    {
-        return abstractStructure(edge.node());
-    }
-    
-#if ENABLE(MASM_PROBE)
-    void probe(std::function&lt;void (CCallHelpers::ProbeContext*)&gt; probeFunc)
-    {
-        UNUSED_PARAM(probeFunc);
-    }
-#endif
-
-    void crash()
-    {
-        crash(m_highBlock-&gt;index, m_node-&gt;index());
-    }
-    void crash(BlockIndex blockIndex, unsigned nodeIndex)
-    {
-#if ASSERT_DISABLED
-        m_out.call(m_out.voidType, m_out.operation(ftlUnreachable));
-        UNUSED_PARAM(blockIndex);
-        UNUSED_PARAM(nodeIndex);
-#else
-        m_out.call(
-            m_out.voidType,
-            m_out.constIntPtr(ftlUnreachable),
-            m_out.constIntPtr(codeBlock()), m_out.constInt32(blockIndex),
-            m_out.constInt32(nodeIndex));
-#endif
-        m_out.unreachable();
-    }
-
-    AvailabilityMap&amp; availabilityMap() { return m_availabilityCalculator.m_availability; }
-    
-    VM&amp; vm() { return m_graph.m_vm; }
-    CodeBlock* codeBlock() { return m_graph.m_codeBlock; }
-    
-    Graph&amp; m_graph;
-    State&amp; m_ftlState;
-    AbstractHeapRepository m_heaps;
-    Output m_out;
-    Procedure&amp; m_proc;
-    
-    LBasicBlock m_prologue;
-    LBasicBlock m_handleExceptions;
-    HashMap&lt;DFG::BasicBlock*, LBasicBlock&gt; m_blocks;
-    
-    LValue m_callFrame;
-    LValue m_captured;
-    LValue m_tagTypeNumber;
-    LValue m_tagMask;
-    
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_int32Values;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_strictInt52Values;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_int52Values;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_jsValueValues;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_booleanValues;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_storageValues;
-    HashMap&lt;Node*, LoweredNodeValue&gt; m_doubleValues;
-    
-    // This is a bit of a hack. It prevents LLVM from having to do CSE on loading of arguments.
-    // It's nice to have these optimizations on our end because we can guarantee them a bit better.
-    // Probably also saves LLVM compile time.
-    HashMap&lt;Node*, LValue&gt; m_loadedArgumentValues;
-    
-    HashMap&lt;Node*, LValue&gt; m_phis;
-    
-    LocalOSRAvailabilityCalculator m_availabilityCalculator;
-    
-    Vector&lt;AvailableRecovery, 3&gt; m_availableRecoveries;
-    
-    InPlaceAbstractState m_state;
-    AbstractInterpreter&lt;InPlaceAbstractState&gt; m_interpreter;
-    DFG::BasicBlock* m_highBlock;
-    DFG::BasicBlock* m_nextHighBlock;
-    LBasicBlock m_nextLowBlock;
-
-    NodeOrigin m_origin;
-    unsigned m_nodeIndex;
-    Node* m_node;
-};
-
-} // anonymous namespace
-
-void lowerDFGToLLVM(State&amp; state)
-{
-    LowerDFGToLLVM lowering(state);
-    lowering.lower();
-}
-
-} } // namespace JSC::FTL
-
-#endif // ENABLE(FTL_JIT)
-
</del></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMh"></a>
<div class="delfile"><h4>Deleted: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -1,43 +0,0 @@
</span><del>-/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef FTLLowerDFGToLLVM_h
-#define FTLLowerDFGToLLVM_h
-
-#if ENABLE(FTL_JIT)
-
-#include &quot;DFGGraph.h&quot;
-#include &quot;FTLState.h&quot;
-
-namespace JSC { namespace FTL {
-
-void lowerDFGToLLVM(State&amp;);
-
-} } // namespace JSC::FTL
-
-#endif // ENABLE(FTL_JIT)
-
-#endif // FTLLowerDFGToLLVM_h
-
</del></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLOSRExitCompilercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -238,7 +238,7 @@
</span><span class="cx">             registerScratch, materializationToPointer);
</span><span class="cx">     };
</span><span class="cx">     
</span><del>-    // Note that we come in here, the stack used to be as LLVM left it except that someone called pushToSave().
</del><ins>+    // Note that when we come in here, the stack used to be as B3 left it except that someone called pushToSave().
</ins><span class="cx">     // We don't care about the value they saved. But, we do appreciate the fact that they did it, because we use
</span><span class="cx">     // that slot for saveAllRegisters().
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLWeighth"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLWeight.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLWeight.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLWeight.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -53,18 +53,6 @@
</span><span class="cx"> 
</span><span class="cx">     B3::FrequencyClass frequencyClass() const { return value() ? B3::FrequencyClass::Normal : B3::FrequencyClass::Rare; }
</span><span class="cx">     
</span><del>-    unsigned scaleToTotal(double total) const
-    {
-        // LLVM accepts 32-bit unsigned branch weights but in dumps it might display them
-        // as signed values. We don't need all 32 bits, so we just use the 31 bits.
-        double result = static_cast&lt;double&gt;(m_value) * INT_MAX / total;
-        if (result &lt; 0)
-            return 0;
-        if (result &gt; INT_MAX)
-            return INT_MAX;
-        return static_cast&lt;unsigned&gt;(result);
-    }
-    
</del><span class="cx">     // Inverse weight for a two-target branch.
</span><span class="cx">     Weight inverse() const
</span><span class="cx">     {
</span></span></pre></div>
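The `scaleToTotal()` helper deleted above scaled a branch weight into LLVM's 31-bit unsigned range, clamping at both ends; B3 takes a two-level `FrequencyClass` instead, so the interface goes away. A standalone sketch of the deleted clamping logic (`scaleWeightToTotal` is an illustrative free function, not WebKit code):

```cpp
#include <cassert>
#include <climits>

// Reproduces the behavior of the deleted FTL::Weight::scaleToTotal():
// scale value/total into [0, INT_MAX], saturating on underflow and overflow.
// LLVM accepted 32-bit unsigned branch weights but could display them as
// signed values in dumps, so only 31 bits were used.
unsigned scaleWeightToTotal(double value, double total)
{
    double result = value * INT_MAX / total;
    if (result < 0)
        return 0;
    if (result > INT_MAX)
        return INT_MAX;
    return static_cast<unsigned>(result);
}
```

Note the saturation in both directions: a negative weight clamps to 0 and anything above `INT_MAX` clamps to `INT_MAX`, so callers never handed LLVM an out-of-range weight.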
<a id="trunkSourceJavaScriptCoreftlFTLWeightedTargeth"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLWeightedTarget.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLWeightedTarget.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/ftl/FTLWeightedTarget.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -73,9 +73,8 @@
</span><span class="cx">     return WeightedTarget(block, 0);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-// This means we let LLVM figure it out basic on its static estimates. LLVM's static
-// estimates are usually pretty darn good, so there's generally nothing wrong with
-// using this.
</del><ins>+// Currently in B3 this is the equivalent of &quot;usually&quot;, but we like to make the distinction in
+// case we ever make B3 support proper branch weights. We used to do that in LLVM.
</ins><span class="cx"> inline WeightedTarget unsure(LBasicBlock block)
</span><span class="cx"> {
</span><span class="cx">     return WeightedTarget(block, Weight());
</span></span></pre></div>
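As the new comment above says, B3 currently treats "unsure" the same as "usually": the surviving `frequencyClass()` accessor in FTLWeight.h maps any nonzero weight to `B3::FrequencyClass::Normal` and only an explicit zero to `Rare`. A minimal sketch of that collapse, assuming (as FTLWeight.h does) that an unset `Weight()` is represented by NaN; the enum here is an illustrative stand-in for the real `B3::FrequencyClass`:

```cpp
#include <cassert>
#include <limits>

// Illustrative stand-in for B3::FrequencyClass.
enum class FrequencyClass { Normal, Rare };

// Any nonzero weight is Normal; only an explicit zero weight is Rare.
// An unset weight carries NaN, and NaN != 0 evaluates to true, so "unsure"
// also lands on Normal -- which is why the comment calls it the equivalent
// of "usually".
FrequencyClass frequencyClassFor(double weight)
{
    return weight != 0 ? FrequencyClass::Normal : FrequencyClass::Rare;
}
```

If B3 later grows proper branch weights, `unsure()` and `usually()` can diverge again without touching call sites, which is the distinction the comment wants to preserve.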
<a id="trunkSourceJavaScriptCorejitCallFrameShuffler64cpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -284,7 +284,7 @@
</span><span class="cx">         // absolutely need to load the callee-save registers into
</span><span class="cx">         // different GPRs initially but not enough pressure to
</span><span class="cx">         // then have to spill all of them. And even in that case,
</span><del>-        // depending on the order in which LLVM saves the
</del><ins>+        // depending on the order in which B3 saves the
</ins><span class="cx">         // callee-saves, we will probably still be safe. Anyway,
</span><span class="cx">         // the couple extra move instructions compared to an
</span><span class="cx">         // efficient cycle-based algorithm are not going to hurt
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitRegisterSetcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/RegisterSet.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/RegisterSet.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/jit/RegisterSet.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -278,7 +278,7 @@
</span><span class="cx">     result.set(GPRInfo::regCS3);
</span><span class="cx">     result.set(GPRInfo::regCS4);
</span><span class="cx"> #elif CPU(ARM64)
</span><del>-    // LLVM might save and use all ARM64 callee saves specified in the ABI.
</del><ins>+    // B3 might save and use all ARM64 callee saves specified in the ABI.
</ins><span class="cx">     result.set(GPRInfo::regCS0);
</span><span class="cx">     result.set(GPRInfo::regCS1);
</span><span class="cx">     result.set(GPRInfo::regCS2);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreruntimeOptionscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/runtime/Options.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/runtime/Options.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/runtime/Options.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -340,16 +340,6 @@
</span><span class="cx">         Options::useOSREntryToFTL() = false;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-#if CPU(ARM64)
-    // FIXME: https://bugs.webkit.org/show_bug.cgi?id=152510
-    // We're running into a bug where some ARM64 tests are failing in the FTL
-    // with what appears to be an llvm bug where llvm is miscalculating the
-    // live-out variables of a patchpoint. This causes us to not keep a 
-    // volatile register alive across a C call in a patchpoint, even though 
-    // that register is used immediately after the patchpoint.
-    Options::assumeAllRegsInFTLICAreLive() = true;
-#endif
-
</del><span class="cx">     // Compute the maximum value of the reoptimization retry counter. This is simply
</span><span class="cx">     // the largest value at which we don't overflow the execute counter, when using it
</span><span class="cx">     // to left-shift the execution counter by this amount. Currently the value ends
</span><span class="lines">@@ -376,12 +366,7 @@
</span><span class="cx">             name_##Default() = defaultValue_;
</span><span class="cx">             JSC_OPTIONS(FOR_EACH_OPTION)
</span><span class="cx"> #undef FOR_EACH_OPTION
</span><del>-    
-                // It *probably* makes sense for other platforms to enable this.
-#if PLATFORM(IOS) &amp;&amp; CPU(ARM64)
-                useLLVMFastISel() = true;
-#endif
-        
</del><ins>+                
</ins><span class="cx">             // Allow environment vars to override options if applicable.
</span><span class="cx">             // The env var should be the name of the option prefixed with
</span><span class="cx">             // &quot;JSC_&quot;.
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreruntimeOptionsh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/runtime/Options.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/runtime/Options.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/runtime/Options.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -180,22 +180,11 @@
</span><span class="cx">     \
</span><span class="cx">     v(bool, useFTLJIT, true, &quot;allows the FTL JIT to be used if true&quot;) \
</span><span class="cx">     v(bool, useFTLTBAA, true, nullptr) \
</span><del>-    v(bool, useLLVMFastISel, false, nullptr) \
-    v(bool, useLLVMSmallCodeModel, false, nullptr) \
-    v(bool, dumpLLVMIR, false, nullptr) \
</del><span class="cx">     v(bool, validateFTLOSRExitLiveness, false, nullptr) \
</span><del>-    v(bool, llvmAlwaysFailsBeforeCompile, false, nullptr) \
-    v(bool, llvmAlwaysFailsBeforeLink, false, nullptr) \
-    v(bool, llvmSimpleOpt, true, nullptr) \
-    v(unsigned, llvmBackendOptimizationLevel, 2, nullptr) \
-    v(unsigned, llvmOptimizationLevel, 2, nullptr) \
-    v(unsigned, llvmSizeLevel, 0, nullptr) \
-    v(unsigned, llvmMaxStackSize, 128 * KB, nullptr) \
-    v(bool, llvmDisallowAVX, true, nullptr) \
</del><ins>+    v(bool, b3AlwaysFailsBeforeCompile, false, nullptr) \
+    v(bool, b3AlwaysFailsBeforeLink, false, nullptr) \
</ins><span class="cx">     v(bool, ftlCrashes, false, nullptr) /* fool-proof way of checking that you ended up in the FTL. ;-) */\
</span><del>-    v(bool, ftlCrashesIfCantInitializeLLVM, false, nullptr) \
</del><span class="cx">     v(bool, clobberAllRegsInFTLICSlowPath, !ASSERT_DISABLED, nullptr) \
</span><del>-    v(bool, assumeAllRegsInFTLICAreLive, false, nullptr) \
</del><span class="cx">     v(bool, useAccessInlining, true, nullptr) \
</span><span class="cx">     v(unsigned, maxAccessVariantListSize, 8, nullptr) \
</span><span class="cx">     v(bool, usePolyvariantDevirtualization, true, nullptr) \
</span><span class="lines">@@ -220,9 +209,6 @@
</span><span class="cx">     \
</span><span class="cx">     v(bool, useProfiler, false, nullptr) \
</span><span class="cx">     \
</span><del>-    v(bool, forceUDis86Disassembler, false, nullptr) \
-    v(bool, forceLLVMDisassembler, false, nullptr) \
-    \
</del><span class="cx">     v(bool, useArchitectureSpecificOptimizations, true, nullptr) \
</span><span class="cx">     \
</span><span class="cx">     v(bool, breakOnThrow, false, nullptr) \
</span><span class="lines">@@ -378,7 +364,6 @@
</span><span class="cx">     v(alwaysDoFullCollection, useGenerationalGC, InvertedOption) \
</span><span class="cx">     v(enableOSREntryToDFG, useOSREntryToDFG, SameOption) \
</span><span class="cx">     v(enableOSREntryToFTL, useOSREntryToFTL, SameOption) \
</span><del>-    v(enableLLVMFastISel, useLLVMFastISel, SameOption) \
</del><span class="cx">     v(enableAccessInlining, useAccessInlining, SameOption) \
</span><span class="cx">     v(enablePolyvariantDevirtualization, usePolyvariantDevirtualization, SameOption) \
</span><span class="cx">     v(enablePolymorphicAccessInlining, usePolymorphicAccessInlining, SameOption) \
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorewasmWASMFunctionB3IRGeneratorhfromrev196731trunkSourceJavaScriptCorewasmWASMFunctionLLVMIRGeneratorh"></a>
<div class="copfile"><h4>Copied: trunk/Source/JavaScriptCore/wasm/WASMFunctionB3IRGenerator.h (from rev 196731, trunk/Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h) (0 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/wasm/WASMFunctionB3IRGenerator.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/wasm/WASMFunctionB3IRGenerator.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -0,0 +1,394 @@
</span><ins>+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef WASMFunctionB3IRGenerator_h
+#define WASMFunctionB3IRGenerator_h
+
+#if ENABLE(WEBASSEMBLY) &amp;&amp; ENABLE(FTL_JIT)
+
+#include &quot;FTLAbbreviatedTypes.h&quot;
+
+#define UNUSED 0
+
+namespace JSC {
+
+using FTL::LBasicBlock;
+using FTL::LValue;
+
+class WASMFunctionB3IRGenerator {
+public:
+    typedef LValue Expression;
+    typedef int Statement;
+    typedef Vector&lt;LValue&gt; ExpressionList;
+    struct MemoryAddress {
+        MemoryAddress(void*) { }
+        MemoryAddress(LValue index, uint32_t offset)
+            : index(index)
+            , offset(offset)
+        {
+        }
+        LValue index;
+        uint32_t offset;
+    };
+    typedef LBasicBlock JumpTarget;
+    enum class JumpCondition { Zero, NonZero };
+
+    void startFunction(const Vector&lt;WASMType&gt;&amp; arguments, uint32_t numberOfI32LocalVariables, uint32_t numberOfF32LocalVariables, uint32_t numberOfF64LocalVariables)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(arguments);
+        UNUSED_PARAM(numberOfI32LocalVariables);
+        UNUSED_PARAM(numberOfF32LocalVariables);
+        UNUSED_PARAM(numberOfF64LocalVariables);
+    }
+
+    void endFunction()
+    {
+        // FIXME: Implement this method.
+    }
+
+    LValue buildSetLocal(WASMOpKind opKind, uint32_t localIndex, LValue value, WASMType type)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(opKind);
+        UNUSED_PARAM(localIndex);
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(type);
+        return UNUSED;
+    }
+
+    LValue buildSetGlobal(WASMOpKind opKind, uint32_t globalIndex, LValue value, WASMType type)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(opKind);
+        UNUSED_PARAM(globalIndex);
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(type);
+        return UNUSED;
+    }
+
+    void buildReturn(LValue value, WASMExpressionType returnType)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(returnType);
+    }
+
+    LValue buildImmediateI32(uint32_t immediate)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(immediate);
+        return UNUSED;
+    }
+
+    LValue buildImmediateF32(float immediate)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(immediate);
+        return UNUSED;
+    }
+
+    LValue buildImmediateF64(double immediate)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(immediate);
+        return UNUSED;
+    }
+
+    LValue buildGetLocal(uint32_t localIndex, WASMType type)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(localIndex);
+        UNUSED_PARAM(type);
+        return UNUSED;
+    }
+
+    LValue buildGetGlobal(uint32_t globalIndex, WASMType type)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(globalIndex);
+        UNUSED_PARAM(type);
+        return UNUSED;
+    }
+
+    LValue buildConvertType(LValue value, WASMExpressionType fromType, WASMExpressionType toType, WASMTypeConversion conversion)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(fromType);
+        UNUSED_PARAM(toType);
+        UNUSED_PARAM(conversion);
+        return UNUSED;
+    }
+
+    LValue buildLoad(const MemoryAddress&amp; memoryAddress, WASMExpressionType expressionType, WASMMemoryType memoryType, MemoryAccessConversion conversion)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(memoryAddress);
+        UNUSED_PARAM(expressionType);
+        UNUSED_PARAM(memoryType);
+        UNUSED_PARAM(conversion);
+        return UNUSED;
+    }
+
+    LValue buildStore(WASMOpKind opKind, const MemoryAddress&amp; memoryAddress, WASMExpressionType expressionType, WASMMemoryType memoryType, LValue value)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(opKind);
+        UNUSED_PARAM(memoryAddress);
+        UNUSED_PARAM(expressionType);
+        UNUSED_PARAM(memoryType);
+        UNUSED_PARAM(value);
+        return UNUSED;
+    }
+
+    LValue buildUnaryI32(LValue value, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildUnaryF32(LValue value, WASMOpExpressionF32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildUnaryF64(LValue value, WASMOpExpressionF64 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildBinaryI32(LValue left, LValue right, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildBinaryF32(LValue left, LValue right, WASMOpExpressionF32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildBinaryF64(LValue left, LValue right, WASMOpExpressionF64 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildRelationalI32(LValue left, LValue right, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildRelationalF32(LValue left, LValue right, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildRelationalF64(LValue left, LValue right, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildMinOrMaxI32(LValue left, LValue right, WASMOpExpressionI32 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildMinOrMaxF64(LValue left, LValue right, WASMOpExpressionF64 op)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(left);
+        UNUSED_PARAM(right);
+        UNUSED_PARAM(op);
+        return UNUSED;
+    }
+
+    LValue buildCallInternal(uint32_t functionIndex, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(functionIndex);
+        UNUSED_PARAM(argumentList);
+        UNUSED_PARAM(signature);
+        UNUSED_PARAM(returnType);
+        return UNUSED;
+    }
+
+    LValue buildCallIndirect(uint32_t functionPointerTableIndex, LValue index, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(functionPointerTableIndex);
+        UNUSED_PARAM(index);
+        UNUSED_PARAM(argumentList);
+        UNUSED_PARAM(signature);
+        UNUSED_PARAM(returnType);
+        return UNUSED;
+    }
+
+    LValue buildCallImport(uint32_t functionImportIndex, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(functionImportIndex);
+        UNUSED_PARAM(argumentList);
+        UNUSED_PARAM(signature);
+        UNUSED_PARAM(returnType);
+        return UNUSED;
+    }
+
+    void appendExpressionList(Vector&lt;LValue&gt;&amp; expressionList, LValue value)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(expressionList);
+        UNUSED_PARAM(value);
+    }
+
+    void discard(LValue)
+    {
+    }
+
+    void linkTarget(LBasicBlock target)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(target);
+    }
+
+    void jumpToTarget(LBasicBlock target)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(target);
+    }
+
+    void jumpToTargetIf(JumpCondition, LValue value, LBasicBlock target)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(target);
+    }
+
+    void startLoop()
+    {
+        // FIXME: Implement this method.
+    }
+
+    void endLoop()
+    {
+        // FIXME: Implement this method.
+    }
+
+    void startSwitch()
+    {
+        // FIXME: Implement this method.
+    }
+
+    void endSwitch()
+    {
+        // FIXME: Implement this method.
+    }
+
+    void startLabel()
+    {
+        // FIXME: Implement this method.
+    }
+
+    void endLabel()
+    {
+        // FIXME: Implement this method.
+    }
+
+    LBasicBlock breakTarget()
+    {
+        // FIXME: Implement this method.
+        return UNUSED;
+    }
+
+    LBasicBlock continueTarget()
+    {
+        // FIXME: Implement this method.
+        return UNUSED;
+    }
+
+    LBasicBlock breakLabelTarget(uint32_t labelIndex)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(labelIndex);
+        return UNUSED;
+    }
+
+    LBasicBlock continueLabelTarget(uint32_t labelIndex)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(labelIndex);
+        return UNUSED;
+    }
+
+    void buildSwitch(LValue value, const Vector&lt;int64_t&gt;&amp; cases, const Vector&lt;LBasicBlock&gt;&amp; targets, LBasicBlock defaultTarget)
+    {
+        // FIXME: Implement this method.
+        UNUSED_PARAM(value);
+        UNUSED_PARAM(cases);
+        UNUSED_PARAM(targets);
+        UNUSED_PARAM(defaultTarget);
+    }
+};
+
+} // namespace JSC
+
+#endif // ENABLE(WEBASSEMBLY) &amp;&amp; ENABLE(FTL_JIT)
+
+#endif // WASMFunctionB3IRGenerator_h
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCorewasmWASMFunctionLLVMIRGeneratorh"></a>
<div class="delfile"><h4>Deleted: trunk/Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/wasm/WASMFunctionLLVMIRGenerator.h        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -1,394 +0,0 @@
</span><del>-/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef WASMFunctionLLVMIRGenerator_h
-#define WASMFunctionLLVMIRGenerator_h
-
-#if ENABLE(WEBASSEMBLY) &amp;&amp; ENABLE(FTL_JIT)
-
-#include &quot;FTLAbbreviatedTypes.h&quot;
-
-#define UNUSED 0
-
-namespace JSC {
-
-using FTL::LBasicBlock;
-using FTL::LValue;
-
-class WASMFunctionLLVMIRGenerator {
-public:
-    typedef LValue Expression;
-    typedef int Statement;
-    typedef Vector&lt;LValue&gt; ExpressionList;
-    struct MemoryAddress {
-        MemoryAddress(void*) { }
-        MemoryAddress(LValue index, uint32_t offset)
-            : index(index)
-            , offset(offset)
-        {
-        }
-        LValue index;
-        uint32_t offset;
-    };
-    typedef LBasicBlock JumpTarget;
-    enum class JumpCondition { Zero, NonZero };
-
-    void startFunction(const Vector&lt;WASMType&gt;&amp; arguments, uint32_t numberOfI32LocalVariables, uint32_t numberOfF32LocalVariables, uint32_t numberOfF64LocalVariables)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(arguments);
-        UNUSED_PARAM(numberOfI32LocalVariables);
-        UNUSED_PARAM(numberOfF32LocalVariables);
-        UNUSED_PARAM(numberOfF64LocalVariables);
-    }
-
-    void endFunction()
-    {
-        // FIXME: Implement this method.
-    }
-
-    LValue buildSetLocal(WASMOpKind opKind, uint32_t localIndex, LValue value, WASMType type)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(opKind);
-        UNUSED_PARAM(localIndex);
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(type);
-        return UNUSED;
-    }
-
-    LValue buildSetGlobal(WASMOpKind opKind, uint32_t globalIndex, LValue value, WASMType type)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(opKind);
-        UNUSED_PARAM(globalIndex);
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(type);
-        return UNUSED;
-    }
-
-    void buildReturn(LValue value, WASMExpressionType returnType)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(returnType);
-    }
-
-    LValue buildImmediateI32(uint32_t immediate)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(immediate);
-        return UNUSED;
-    }
-
-    LValue buildImmediateF32(float immediate)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(immediate);
-        return UNUSED;
-    }
-
-    LValue buildImmediateF64(double immediate)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(immediate);
-        return UNUSED;
-    }
-
-    LValue buildGetLocal(uint32_t localIndex, WASMType type)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(localIndex);
-        UNUSED_PARAM(type);
-        return UNUSED;
-    }
-
-    LValue buildGetGlobal(uint32_t globalIndex, WASMType type)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(globalIndex);
-        UNUSED_PARAM(type);
-        return UNUSED;
-    }
-
-    LValue buildConvertType(LValue value, WASMExpressionType fromType, WASMExpressionType toType, WASMTypeConversion conversion)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(fromType);
-        UNUSED_PARAM(toType);
-        UNUSED_PARAM(conversion);
-        return UNUSED;
-    }
-
-    LValue buildLoad(const MemoryAddress&amp; memoryAddress, WASMExpressionType expressionType, WASMMemoryType memoryType, MemoryAccessConversion conversion)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(memoryAddress);
-        UNUSED_PARAM(expressionType);
-        UNUSED_PARAM(memoryType);
-        UNUSED_PARAM(conversion);
-        return UNUSED;
-    }
-
-    LValue buildStore(WASMOpKind opKind, const MemoryAddress&amp; memoryAddress, WASMExpressionType expressionType, WASMMemoryType memoryType, LValue value)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(opKind);
-        UNUSED_PARAM(memoryAddress);
-        UNUSED_PARAM(expressionType);
-        UNUSED_PARAM(memoryType);
-        UNUSED_PARAM(value);
-        return UNUSED;
-    }
-
-    LValue buildUnaryI32(LValue value, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildUnaryF32(LValue value, WASMOpExpressionF32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildUnaryF64(LValue value, WASMOpExpressionF64 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildBinaryI32(LValue left, LValue right, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildBinaryF32(LValue left, LValue right, WASMOpExpressionF32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildBinaryF64(LValue left, LValue right, WASMOpExpressionF64 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildRelationalI32(LValue left, LValue right, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildRelationalF32(LValue left, LValue right, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildRelationalF64(LValue left, LValue right, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildMinOrMaxI32(LValue left, LValue right, WASMOpExpressionI32 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildMinOrMaxF64(LValue left, LValue right, WASMOpExpressionF64 op)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(left);
-        UNUSED_PARAM(right);
-        UNUSED_PARAM(op);
-        return UNUSED;
-    }
-
-    LValue buildCallInternal(uint32_t functionIndex, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(functionIndex);
-        UNUSED_PARAM(argumentList);
-        UNUSED_PARAM(signature);
-        UNUSED_PARAM(returnType);
-        return UNUSED;
-    }
-
-    LValue buildCallIndirect(uint32_t functionPointerTableIndex, LValue index, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(functionPointerTableIndex);
-        UNUSED_PARAM(index);
-        UNUSED_PARAM(argumentList);
-        UNUSED_PARAM(signature);
-        UNUSED_PARAM(returnType);
-        return UNUSED;
-    }
-
-    LValue buildCallImport(uint32_t functionImportIndex, const Vector&lt;LValue&gt;&amp; argumentList, const WASMSignature&amp; signature, WASMExpressionType returnType)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(functionImportIndex);
-        UNUSED_PARAM(argumentList);
-        UNUSED_PARAM(signature);
-        UNUSED_PARAM(returnType);
-        return UNUSED;
-    }
-
-    void appendExpressionList(Vector&lt;LValue&gt;&amp; expressionList, LValue value)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(expressionList);
-        UNUSED_PARAM(value);
-    }
-
-    void discard(LValue)
-    {
-    }
-
-    void linkTarget(LBasicBlock target)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(target);
-    }
-
-    void jumpToTarget(LBasicBlock target)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(target);
-    }
-
-    void jumpToTargetIf(JumpCondition, LValue value, LBasicBlock target)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(target);
-    }
-
-    void startLoop()
-    {
-        // FIXME: Implement this method.
-    }
-
-    void endLoop()
-    {
-        // FIXME: Implement this method.
-    }
-
-    void startSwitch()
-    {
-        // FIXME: Implement this method.
-    }
-
-    void endSwitch()
-    {
-        // FIXME: Implement this method.
-    }
-
-    void startLabel()
-    {
-        // FIXME: Implement this method.
-    }
-
-    void endLabel()
-    {
-        // FIXME: Implement this method.
-    }
-
-    LBasicBlock breakTarget()
-    {
-        // FIXME: Implement this method.
-        return UNUSED;
-    }
-
-    LBasicBlock continueTarget()
-    {
-        // FIXME: Implement this method.
-        return UNUSED;
-    }
-
-    LBasicBlock breakLabelTarget(uint32_t labelIndex)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(labelIndex);
-        return UNUSED;
-    }
-
-    LBasicBlock continueLabelTarget(uint32_t labelIndex)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(labelIndex);
-        return UNUSED;
-    }
-
-    void buildSwitch(LValue value, const Vector&lt;int64_t&gt;&amp; cases, const Vector&lt;LBasicBlock&gt;&amp; targets, LBasicBlock defaultTarget)
-    {
-        // FIXME: Implement this method.
-        UNUSED_PARAM(value);
-        UNUSED_PARAM(cases);
-        UNUSED_PARAM(targets);
-        UNUSED_PARAM(defaultTarget);
-    }
-};
-
-} // namespace JSC
-
-#endif // ENABLE(WEBASSEMBLY)
-
-#endif // WASMFunctionLLVMIRGenerator_h
</del></span></pre></div>
<a id="trunkSourceJavaScriptCorewasmWASMFunctionParsercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/wasm/WASMFunctionParser.cpp (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/wasm/WASMFunctionParser.cpp        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Source/JavaScriptCore/wasm/WASMFunctionParser.cpp        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -31,7 +31,7 @@
</span><span class="cx"> #include &quot;JSCJSValueInlines.h&quot;
</span><span class="cx"> #include &quot;JSWASMModule.h&quot;
</span><span class="cx"> #include &quot;WASMFunctionCompiler.h&quot;
</span><del>-#include &quot;WASMFunctionLLVMIRGenerator.h&quot;
</del><ins>+#include &quot;WASMFunctionB3IRGenerator.h&quot;
</ins><span class="cx"> #include &quot;WASMFunctionSyntaxChecker.h&quot;
</span><span class="cx"> 
</span><span class="cx"> #define PROPAGATE_ERROR() do { if (!m_errorMessage.isNull()) return 0; } while (0)
</span></span></pre></div>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Tools/ChangeLog        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -1,3 +1,12 @@
</span><ins>+2016-02-18  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        Remove remaining references to LLVM, and make sure comments refer to the backend as &quot;B3&quot; not &quot;LLVM&quot;
+        https://bugs.webkit.org/show_bug.cgi?id=154383
+
+        Reviewed by Saam Barati.
+
+        * Scripts/run-jsc-stress-tests:
+
</ins><span class="cx"> 2016-02-17  Filip Pizlo  &lt;fpizlo@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         Remove LLVM dependencies from WebKit
</span></span></pre></div>
<a id="trunkToolsScriptsrunjscstresstests"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/run-jsc-stress-tests (196755 => 196756)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/run-jsc-stress-tests        2016-02-18 16:12:23 UTC (rev 196755)
+++ trunk/Tools/Scripts/run-jsc-stress-tests        2016-02-18 16:40:25 UTC (rev 196756)
</span><span class="lines">@@ -398,7 +398,7 @@
</span><span class="cx"> BASE_OPTIONS = [&quot;--useFTLJIT=false&quot;, &quot;--useFunctionDotArguments=true&quot;]
</span><span class="cx"> EAGER_OPTIONS = [&quot;--thresholdForJITAfterWarmUp=10&quot;, &quot;--thresholdForJITSoon=10&quot;, &quot;--thresholdForOptimizeAfterWarmUp=20&quot;, &quot;--thresholdForOptimizeAfterLongWarmUp=20&quot;, &quot;--thresholdForOptimizeSoon=20&quot;, &quot;--thresholdForFTLOptimizeAfterWarmUp=20&quot;, &quot;--thresholdForFTLOptimizeSoon=20&quot;, &quot;--maximumEvalCacheableSourceLength=150000&quot;]
</span><span class="cx"> NO_CJIT_OPTIONS = [&quot;--useConcurrentJIT=false&quot;, &quot;--thresholdForJITAfterWarmUp=100&quot;]
</span><del>-FTL_OPTIONS = [&quot;--useFTLJIT=true&quot;, &quot;--ftlCrashesIfCantInitializeLLVM=true&quot;]
</del><ins>+FTL_OPTIONS = [&quot;--useFTLJIT=true&quot;]
</ins><span class="cx"> 
</span><span class="cx"> $runlist = []
</span><span class="cx"> 
</span><span class="lines">@@ -823,10 +823,6 @@
</span><span class="cx">     run(&quot;no-cjit-no-aso&quot;, &quot;--useArchitectureSpecificOptimizations=false&quot;, *NO_CJIT_OPTIONS)
</span><span class="cx"> end
</span><span class="cx"> 
</span><del>-def runFTLNoCJITNoSimpleOpt
-    run(&quot;ftl-no-cjit-no-simple-opt&quot;, &quot;--llvmSimpleOpt=false&quot;, *(FTL_OPTIONS + NO_CJIT_OPTIONS)) if $enableFTL
-end
-
</del><span class="cx"> def runNoCJITNoAccessInlining
</span><span class="cx">     run(&quot;no-cjit-no-access-inlining&quot;, &quot;--useAccessInlining=false&quot;, *NO_CJIT_OPTIONS)
</span><span class="cx"> end
</span><span class="lines">@@ -888,7 +884,6 @@
</span><span class="cx"> 
</span><span class="cx"> def defaultSpotCheckNoMaximalFlush
</span><span class="cx">     defaultQuickRun
</span><del>-    runFTLNoCJITNoSimpleOpt
</del><span class="cx">     runFTLNoCJITOSRValidation
</span><span class="cx">     runNoCJITNoAccessInlining
</span><span class="cx">     runFTLNoCJITNoAccessInlining
</span></span></pre>
</div>
</div>

</body>
</html>