<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[212813] trunk/Source/JavaScriptCore</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/212813">212813</a></dd>
<dt>Author</dt> <dd>sbarati@apple.com</dd>
<dt>Date</dt> <dd>2017-02-22 00:04:18 -0800 (Wed, 22 Feb 2017)</dd>
</dl>

<h3>Log Message</h3>
<pre>Unreviewed. Rename AirGraphColoring.* files to AirAllocateRegistersByGraphColoring.* to be more consistent with the rest of the Air file names.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* b3/air/AirAllocateRegistersByGraphColoring.cpp: Copied from Source/JavaScriptCore/b3/air/AirGraphColoring.cpp.
* b3/air/AirAllocateRegistersByGraphColoring.h: Copied from Source/JavaScriptCore/b3/air/AirGraphColoring.h.
* b3/air/AirGenerate.cpp:
* b3/air/AirGraphColoring.cpp: Removed.
* b3/air/AirGraphColoring.h: Removed.</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreCMakeListstxt">trunk/Source/JavaScriptCore/CMakeLists.txt</a></li>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj">trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj</a></li>
<li><a href="#trunkSourceJavaScriptCoreb3airAirGeneratecpp">trunk/Source/JavaScriptCore/b3/air/AirGenerate.cpp</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreb3airAirAllocateRegistersByGraphColoringcpp">trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreb3airAirAllocateRegistersByGraphColoringh">trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.h</a></li>
</ul>

<h3>Removed Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreb3airAirGraphColoringcpp">trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreb3airAirGraphColoringh">trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceJavaScriptCoreCMakeListstxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/CMakeLists.txt        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -72,6 +72,7 @@
</span><span class="cx">     assembler/MacroAssemblerPrinter.cpp
</span><span class="cx">     assembler/MacroAssemblerX86Common.cpp
</span><span class="cx"> 
</span><ins>+    b3/air/AirAllocateRegistersByGraphColoring.cpp
</ins><span class="cx">     b3/air/AirAllocateStack.cpp
</span><span class="cx">     b3/air/AirArg.cpp
</span><span class="cx">     b3/air/AirBasicBlock.cpp
</span><span class="lines">@@ -87,7 +88,6 @@
</span><span class="cx">     b3/air/AirFixPartialRegisterStalls.cpp
</span><span class="cx">     b3/air/AirGenerate.cpp
</span><span class="cx">     b3/air/AirGenerated.cpp
</span><del>-    b3/air/AirGraphColoring.cpp
</del><span class="cx">     b3/air/AirHandleCalleeSaves.cpp
</span><span class="cx">     b3/air/AirInsertionSet.cpp
</span><span class="cx">     b3/air/AirInst.cpp
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/ChangeLog        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -1,3 +1,15 @@
</span><ins>+2017-02-22  Saam Barati  &lt;sbarati@apple.com&gt;
+
+        Unreviewed. Rename AirGraphColoring.* files to AirAllocateRegistersByGraphColoring.* to be more consistent with the rest of the Air file names.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * b3/air/AirAllocateRegistersByGraphColoring.cpp: Copied from Source/JavaScriptCore/b3/air/AirGraphColoring.cpp.
+        * b3/air/AirAllocateRegistersByGraphColoring.h: Copied from Source/JavaScriptCore/b3/air/AirGraphColoring.h.
+        * b3/air/AirGenerate.cpp:
+        * b3/air/AirGraphColoring.cpp: Removed.
+        * b3/air/AirGraphColoring.h: Removed.
+
</ins><span class="cx"> 2017-02-21  Youenn Fablet  &lt;youenn@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         [WebRTC][Mac] Activate libwebrtc
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -1451,8 +1451,8 @@
</span><span class="cx">                 795B19981D78BE3500262FA0 /* MapBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 795B19961D78BE3500262FA0 /* MapBase.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 795F099D1E03600500BBE37F /* B3Compile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 795F099C1E03600500BBE37F /* B3Compile.cpp */; };
</span><span class="cx">                 7964656A1B952FF0003059EE /* GetPutInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 796465681B952FF0003059EE /* GetPutInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><del>-                7965A1191E5D3FFE001BA50D /* AirGraphColoring.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7965A1171E5D3FFE001BA50D /* AirGraphColoring.cpp */; };
-                7965A11A1E5D3FFE001BA50D /* AirGraphColoring.h in Headers */ = {isa = PBXBuildFile; fileRef = 7965A1181E5D3FFE001BA50D /* AirGraphColoring.h */; settings = {ATTRIBUTES = (Private, ); }; };
</del><ins>+                7965C2161E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7965C2141E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp */; };
+                7965C2171E5D799600B7591D /* AirAllocateRegistersByGraphColoring.h in Headers */ = {isa = PBXBuildFile; fileRef = 7965C2151E5D799600B7591D /* AirAllocateRegistersByGraphColoring.h */; settings = {ATTRIBUTES = (Private, ); }; };
</ins><span class="cx">                 796FB43A1DFF8C3F0039C95D /* JSWebAssemblyHelpers.h in Headers */ = {isa = PBXBuildFile; fileRef = 796FB4391DFF8C3F0039C95D /* JSWebAssemblyHelpers.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 797E07A91B8FCFB9008400BA /* JSGlobalLexicalEnvironment.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 797E07A71B8FCFB9008400BA /* JSGlobalLexicalEnvironment.cpp */; };
</span><span class="cx">                 797E07AA1B8FCFB9008400BA /* JSGlobalLexicalEnvironment.h in Headers */ = {isa = PBXBuildFile; fileRef = 797E07A81B8FCFB9008400BA /* JSGlobalLexicalEnvironment.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="lines">@@ -3946,8 +3946,8 @@
</span><span class="cx">                 795B19961D78BE3500262FA0 /* MapBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MapBase.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 795F099C1E03600500BBE37F /* B3Compile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3Compile.cpp; path = b3/B3Compile.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 796465681B952FF0003059EE /* GetPutInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GetPutInfo.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><del>-                7965A1171E5D3FFE001BA50D /* AirGraphColoring.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirGraphColoring.cpp; path = b3/air/AirGraphColoring.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
-                7965A1181E5D3FFE001BA50D /* AirGraphColoring.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirGraphColoring.h; path = b3/air/AirGraphColoring.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</del><ins>+                7965C2141E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirAllocateRegistersByGraphColoring.cpp; path = b3/air/AirAllocateRegistersByGraphColoring.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
+                7965C2151E5D799600B7591D /* AirAllocateRegistersByGraphColoring.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirAllocateRegistersByGraphColoring.h; path = b3/air/AirAllocateRegistersByGraphColoring.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</ins><span class="cx">                 796FB4391DFF8C3F0039C95D /* JSWebAssemblyHelpers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = JSWebAssemblyHelpers.h; path = js/JSWebAssemblyHelpers.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 797E07A71B8FCFB9008400BA /* JSGlobalLexicalEnvironment.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSGlobalLexicalEnvironment.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 797E07A81B8FCFB9008400BA /* JSGlobalLexicalEnvironment.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSGlobalLexicalEnvironment.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="lines">@@ -5446,6 +5446,8 @@
</span><span class="cx">                 0FEC84B31BDACD880080FF74 /* air */ = {
</span><span class="cx">                         isa = PBXGroup;
</span><span class="cx">                         children = (
</span><ins>+                                7965C2141E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp */,
+                                7965C2151E5D799600B7591D /* AirAllocateRegistersByGraphColoring.h */,
</ins><span class="cx">                                 0FEC85481BDACDC70080FF74 /* AirAllocateStack.cpp */,
</span><span class="cx">                                 0FEC85491BDACDC70080FF74 /* AirAllocateStack.h */,
</span><span class="cx">                                 0FEC854A1BDACDC70080FF74 /* AirArg.cpp */,
</span><span class="lines">@@ -5479,8 +5481,6 @@
</span><span class="cx">                                 0FEC85541BDACDC70080FF74 /* AirGenerate.h */,
</span><span class="cx">                                 0FEC85921BDB1E100080FF74 /* AirGenerated.cpp */,
</span><span class="cx">                                 0FEC85551BDACDC70080FF74 /* AirGenerationContext.h */,
</span><del>-                                7965A1171E5D3FFE001BA50D /* AirGraphColoring.cpp */,
-                                7965A1181E5D3FFE001BA50D /* AirGraphColoring.h */,
</del><span class="cx">                                 0FEC85561BDACDC70080FF74 /* AirHandleCalleeSaves.cpp */,
</span><span class="cx">                                 0FEC85571BDACDC70080FF74 /* AirHandleCalleeSaves.h */,
</span><span class="cx">                                 0FEC85581BDACDC70080FF74 /* AirInsertionSet.cpp */,
</span><span class="lines">@@ -8665,7 +8665,6 @@
</span><span class="cx">                                 8B9F6D561D5912FA001C739F /* IterationKind.h in Headers */,
</span><span class="cx">                                 FE4D55B81AE716CA0052E459 /* IterationStatus.h in Headers */,
</span><span class="cx">                                 70113D4C1A8DB093003848C4 /* IteratorOperations.h in Headers */,
</span><del>-                                7965A11A1E5D3FFE001BA50D /* AirGraphColoring.h in Headers */,
</del><span class="cx">                                 70DC3E0A1B2DF2C700054299 /* IteratorPrototype.h in Headers */,
</span><span class="cx">                                 BC18C4130E16F5CD00B34460 /* JavaScript.h in Headers */,
</span><span class="cx">                                 A503FA1A188E0FB000110F14 /* JavaScriptCallFrame.h in Headers */,
</span><span class="lines">@@ -9249,6 +9248,7 @@
</span><span class="cx">                                 AD2FCBF31DB58DAD00B3E736 /* WebAssemblyInstancePrototype.h in Headers */,
</span><span class="cx">                                 AD2FCC191DB59CB200B3E736 /* WebAssemblyInstancePrototype.lut.h in Headers */,
</span><span class="cx">                                 ADE8029A1E08F1DE0058DE78 /* WebAssemblyLinkErrorConstructor.h in Headers */,
</span><ins>+                                7965C2171E5D799600B7591D /* AirAllocateRegistersByGraphColoring.h in Headers */,
</ins><span class="cx">                                 ADE8029C1E08F1DE0058DE78 /* WebAssemblyLinkErrorPrototype.h in Headers */,
</span><span class="cx">                                 AD2FCBF51DB58DAD00B3E736 /* WebAssemblyMemoryConstructor.h in Headers */,
</span><span class="cx">                                 AD2FCC1A1DB59CB200B3E736 /* WebAssemblyMemoryConstructor.lut.h in Headers */,
</span><span class="lines">@@ -9979,7 +9979,6 @@
</span><span class="cx">                                 0FD3E4011B618AAF00C80E1E /* DFGAdaptiveInferredPropertyValueWatchpoint.cpp in Sources */,
</span><span class="cx">                                 0F18D3CF1B55A6E0002C5C9F /* DFGAdaptiveStructureWatchpoint.cpp in Sources */,
</span><span class="cx">                                 0F2DD8111AB3D8BE00BBB8E8 /* DFGArgumentsEliminationPhase.cpp in Sources */,
</span><del>-                                7965A1191E5D3FFE001BA50D /* AirGraphColoring.cpp in Sources */,
</del><span class="cx">                                 0F2DD8131AB3D8BE00BBB8E8 /* DFGArgumentsUtilities.cpp in Sources */,
</span><span class="cx">                                 0F485321187750560083B687 /* DFGArithMode.cpp in Sources */,
</span><span class="cx">                                 0F63948415E48118006A597C /* DFGArrayMode.cpp in Sources */,
</span><span class="lines">@@ -10680,6 +10679,7 @@
</span><span class="cx">                                 ADBC54D41DF8EA2B005BF738 /* WebAssemblyToJSCallee.cpp in Sources */,
</span><span class="cx">                                 0FC8150B14043C0E00CFA603 /* WriteBarrierSupport.cpp in Sources */,
</span><span class="cx">                                 A7E5AB3A1799E4B200D2833D /* X86Disassembler.cpp in Sources */,
</span><ins>+                                7965C2161E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp in Sources */,
</ins><span class="cx">                                 863C6D9C1521111A00585E4E /* YarrCanonicalizeUCS2.cpp in Sources */,
</span><span class="cx">                                 65FB63A41C8EA09C0020719B /* YarrCanonicalizeUnicode.cpp in Sources */,
</span><span class="cx">                                 86704B8412DBA33700A9FE7B /* YarrInterpreter.cpp in Sources */,
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreb3airAirAllocateRegistersByGraphColoringcppfromrev212812trunkSourceJavaScriptCoreb3airAirGraphColoringcpp"></a>
<div class="copfile"><h4>Copied: trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.cpp (from rev 212812, trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.cpp) (0 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.cpp                                (rev 0)
+++ trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.cpp        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -0,0 +1,2133 @@
</span><ins>+/*
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include &quot;config.h&quot;
+#include &quot;AirAllocateRegistersByGraphColoring.h&quot;
+
+#if ENABLE(B3_JIT)
+
+#include &quot;AirCode.h&quot;
+#include &quot;AirInsertionSet.h&quot;
+#include &quot;AirInstInlines.h&quot;
+#include &quot;AirLiveness.h&quot;
+#include &quot;AirPadInterference.h&quot;
+#include &quot;AirPhaseScope.h&quot;
+#include &quot;AirTmpInlines.h&quot;
+#include &quot;AirTmpWidth.h&quot;
+#include &quot;AirUseCounts.h&quot;
+#include &lt;wtf/ListDump.h&gt;
+
+namespace JSC { namespace B3 { namespace Air {
+
+namespace {
+
+bool debug = false;
+bool traceDebug = false;
+bool reportStats = false;
+
+// The AbstractColoringAllocator defines all the code that is independent
+// of the register type and can be shared when allocating registers.
+template&lt;typename IndexType, typename TmpMapper&gt;
+class AbstractColoringAllocator {
+public:
+    AbstractColoringAllocator(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
+        : m_regsInPriorityOrder(regsInPriorityOrder)
+        , m_lastPrecoloredRegisterIndex(lastPrecoloredRegisterIndex)
+        , m_unspillableTmps(unspillableTmps)
+        , m_useCounts(useCounts)
+        , m_code(code)
+    {
+        for (Reg reg : m_regsInPriorityOrder)
+            m_mutableRegs.set(reg);
+        
+        initializeDegrees(tmpArraySize);
+        
+        m_adjacencyList.resize(tmpArraySize);
+        m_moveList.resize(tmpArraySize);
+        m_coalescedTmps.fill(0, tmpArraySize);
+        m_isOnSelectStack.ensureSize(tmpArraySize);
+        m_spillWorklist.ensureSize(tmpArraySize);
+    }
+
+protected:
+
+    unsigned registerCount() const { return m_regsInPriorityOrder.size(); }
+
+    IndexType getAlias(IndexType tmpIndex) const
+    {
+        IndexType alias = tmpIndex;
+        while (IndexType nextAlias = m_coalescedTmps[alias])
+            alias = nextAlias;
+        return alias;
+    }
+
+    void addEdge(IndexType a, IndexType b)
+    {
+        if (a == b)
+            return;
+        addEdgeDistinct(a, b);
+    }
+
+    void addToSpill(unsigned toSpill)
+    {
+        if (m_unspillableTmps.contains(toSpill))
+            return;
+
+        m_spillWorklist.add(toSpill);
+    }
+
+    bool isPrecolored(IndexType tmpIndex)
+    {
+        return tmpIndex &lt;= m_lastPrecoloredRegisterIndex;
+    }
+
+    void initializeDegrees(unsigned tmpArraySize)
+    {
+        m_degrees.resize(tmpArraySize);
+
+        // All precolored registers have an &quot;infinite&quot; degree.
+        unsigned firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (unsigned i = 0; i &lt; firstNonRegIndex; ++i)
+            m_degrees[i] = std::numeric_limits&lt;unsigned&gt;::max();
+
+        memset(m_degrees.data() + firstNonRegIndex, 0, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
+    }
+
+    void addEdgeDistinct(IndexType a, IndexType b)
+    {
+        ASSERT(a != b);
+        bool isNewEdge = addInterferenceEdge(InterferenceEdge(a, b));
+        if (isNewEdge) {
+            if (!isPrecolored(a)) {
+                ASSERT(!m_adjacencyList[a].contains(b));
+                m_adjacencyList[a].append(b);
+                m_degrees[a]++;
+            }
+
+            if (!isPrecolored(b)) {
+                ASSERT(!m_adjacencyList[b].contains(a));
+                m_adjacencyList[b].append(a);
+                m_degrees[b]++;
+            }
+        }
+    }
+
+    bool addEdgeDistinctWithoutDegreeChange(IndexType a, IndexType b)
+    {
+        ASSERT(a != b);
+        bool isNewEdge = addInterferenceEdge(InterferenceEdge(a, b));
+        if (isNewEdge) {
+            if (!isPrecolored(a)) {
+                ASSERT(!m_adjacencyList[a].contains(b));
+                m_adjacencyList[a].append(b);
+            }
+
+            if (!isPrecolored(b)) {
+                ASSERT(!m_adjacencyList[b].contains(a));
+                m_adjacencyList[b].append(a);
+            }
+            return true;
+        }
+        return false;
+    }
+
+    template&lt;typename Function&gt;
+    void forEachAdjacent(IndexType tmpIndex, Function function)
+    {
+        for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+            if (!hasBeenSimplified(adjacentTmpIndex))
+                function(adjacentTmpIndex);
+        }
+    }
+
+    bool hasBeenSimplified(IndexType tmpIndex)
+    {
+        if (!ASSERT_DISABLED) {
+            if (!!m_coalescedTmps[tmpIndex])
+                ASSERT(getAlias(tmpIndex) != tmpIndex);
+        }
+
+        return m_isOnSelectStack.quickGet(tmpIndex) || !!m_coalescedTmps[tmpIndex];
+    }
+
+    bool canBeSafelyCoalesced(IndexType u, IndexType v)
+    {
+        ASSERT(!isPrecolored(v));
+        if (isPrecolored(u))
+            return precoloredCoalescingHeuristic(u, v);
+        return conservativeHeuristic(u, v);
+    }
+
+    bool conservativeHeuristic(IndexType u, IndexType v)
+    {
+        // This uses Briggs' conservative coalescing rule:
+        // If the combined node has fewer than K adjacent nodes of degree &gt;= K,
+        // it is safe to combine the two nodes. The reason is that if the graph
+        // is colorable, the combined node has fewer than K high-degree neighbors,
+        // so a color is still available for it.
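+        //
+        // Illustrative sketch (numbers chosen for exposition): with K = 3 registers, if u and v
+        // together have only two distinct neighbors of degree &gt;= 3, those neighbors can rule out
+        // at most two of the three colors, so the combined node can still be colored; a third such
+        // neighbor would make this rule reject the coalescing.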
+        ASSERT(u != v);
+        ASSERT(!isPrecolored(u));
+        ASSERT(!isPrecolored(v));
+
+        const auto&amp; adjacentsOfU = m_adjacencyList[u];
+        const auto&amp; adjacentsOfV = m_adjacencyList[v];
+
+        if (adjacentsOfU.size() + adjacentsOfV.size() &lt; registerCount()) {
+            // Shortcut: if the total number of adjacents is less than the number of registers, the condition is always met.
+            return true;
+        }
+
+        HashSet&lt;IndexType&gt; highOrderAdjacents;
+
+        for (IndexType adjacentTmpIndex : adjacentsOfU) {
+            ASSERT(adjacentTmpIndex != v);
+            ASSERT(adjacentTmpIndex != u);
+            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= registerCount())
+                    return false;
+            }
+        }
+        for (IndexType adjacentTmpIndex : adjacentsOfV) {
+            ASSERT(adjacentTmpIndex != u);
+            ASSERT(adjacentTmpIndex != v);
+            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= registerCount())
+                    return false;
+            }
+        }
+
+        ASSERT(highOrderAdjacents.size() &lt; registerCount());
+        return true;
+    }
+
+    bool precoloredCoalescingHeuristic(IndexType u, IndexType v)
+    {
+        if (traceDebug)
+            dataLog(&quot;    Checking precoloredCoalescingHeuristic\n&quot;);
+        ASSERT(isPrecolored(u));
+        ASSERT(!isPrecolored(v));
+        
+        // If u is a pinned register then it's always safe to coalesce. Note that when we call this,
+        // we have already proved that there is no interference between u and v.
+        if (!m_mutableRegs.get(m_coloredTmp[u]))
+            return true;
+
+        // If any neighbor of the non-colored node is not a neighbor of the colored node AND has a degree &gt;= K,
+        // there is a risk that this node needs to have the same color as our precolored node. If we coalesce such
+        // a move, we may create an uncolorable graph.
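+        //
+        // Hypothetical sketch of that risk (K = 2, u precolored): a neighbor w of v with degree &gt;= 2
+        // that does not currently interfere with u may later need u's color; coalescing u and v would
+        // make w interfere with u and take that color away from it, possibly leaving the graph
+        // uncolorable.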
+        const auto&amp; adjacentsOfV = m_adjacencyList[v];
+        for (unsigned adjacentTmpIndex : adjacentsOfV) {
+            if (!isPrecolored(adjacentTmpIndex)
+                &amp;&amp; !hasBeenSimplified(adjacentTmpIndex)
+                &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()
+                &amp;&amp; !hasInterferenceEdge(InterferenceEdge(u, adjacentTmpIndex)))
+                return false;
+        }
+        return true;
+    }
+
+    IndexType selectSpill()
+    {
+        if (!m_hasSelectedSpill) {
+            m_hasSelectedSpill = true;
+
+            if (m_hasCoalescedNonTrivialMove)
+                m_coalescedTmpsAtSpill = m_coalescedTmps;
+        }
+
+        auto iterator = m_spillWorklist.begin();
+
+        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), &quot;selectSpill() called when there was no spill.&quot;);
+        RELEASE_ASSERT_WITH_MESSAGE(!m_unspillableTmps.contains(*iterator), &quot;trying to spill unspillable tmp&quot;);
+
+        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
+        // gets a register.
+        auto score = [&amp;] (Tmp tmp) -&gt; double {
+            // Air exposes the concept of &quot;fast tmps&quot;, and we interpret that to mean that the tmp
+            // should always be in a register.
+            if (m_code.isFastTmp(tmp))
+                return 0;
+            
+            // All else being equal, the score should be directly related to the degree.
+            double degree = static_cast&lt;double&gt;(m_degrees[TmpMapper::absoluteIndex(tmp)]);
+
+            // All else being equal, the score should be inversely related to the number of warm uses and
+            // defs.
+            const UseCounts&lt;Tmp&gt;::Counts* counts = m_useCounts[tmp];
+            if (!counts)
+                return std::numeric_limits&lt;double&gt;::infinity();
+            
+            double uses = counts-&gt;numWarmUses + counts-&gt;numDefs;
+
+            // If it's a constant, then it's not as bad to spill. We can rematerialize it in many
+            // cases.
+            if (counts-&gt;numConstDefs == 1 &amp;&amp; counts-&gt;numDefs == 1)
+                uses /= 2;
+
+            return degree / uses;
+        };
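+        // Worked example of the scoring (made-up counts): a tmp with degree 12, two warm uses and
+        // one non-constant def scores 12 / 3 = 4. If that single def is instead a constant, the use
+        // count is halved and the score doubles to 8, making the rematerializable constant a more
+        // attractive spill candidate.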
+
+        auto victimIterator = iterator;
+        double maxScore = score(TmpMapper::tmpFromAbsoluteIndex(*iterator));
+
+        ++iterator;
+        for (;iterator != m_spillWorklist.end(); ++iterator) {
+            double tmpScore = score(TmpMapper::tmpFromAbsoluteIndex(*iterator));
+            if (tmpScore &gt; maxScore) {
+                ASSERT(!m_unspillableTmps.contains(*iterator));
+                victimIterator = iterator;
+                maxScore = tmpScore;
+            }
+        }
+
+        IndexType victimIndex = *victimIterator;
+        if (traceDebug)
+            dataLogLn(&quot;Selecting spill &quot;, victimIndex);
+        ASSERT(!isPrecolored(victimIndex));
+        return victimIndex;
+    }
+
+    void assignColors()
+    {
+        ASSERT(m_simplifyWorklist.isEmpty());
+        ASSERT(!m_spillWorklist.bitCount());
+
+        // Reclaim as much memory as possible.
+        m_interferenceEdges.clear();
+
+        m_degrees.clear();
+        m_moveList.clear();
+        m_simplifyWorklist.clear();
+        m_spillWorklist.clearAll();
+
+        // Try to color the Tmp on the stack.
+        m_coloredTmp.resize(m_adjacencyList.size());
+
+        while (!m_selectStack.isEmpty()) {
+            unsigned tmpIndex = m_selectStack.takeLast();
+            m_isOnSelectStack.quickClear(tmpIndex);
+            ASSERT(!isPrecolored(tmpIndex));
+            ASSERT(!m_coloredTmp[tmpIndex]);
+            RELEASE_ASSERT(getAlias(tmpIndex) == tmpIndex);
+
+            RegisterSet coloredRegisters;
+            for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+                IndexType aliasTmpIndex = getAlias(adjacentTmpIndex);
+                Reg reg = m_coloredTmp[aliasTmpIndex];
+
+                ASSERT(!isPrecolored(aliasTmpIndex) || (isPrecolored(aliasTmpIndex) &amp;&amp; reg));
+
+                if (reg)
+                    coloredRegisters.set(reg);
+            }
+
+            bool colorAssigned = false;
+            for (Reg reg : m_regsInPriorityOrder) {
+                if (!coloredRegisters.get(reg)) {
+                    m_coloredTmp[tmpIndex] = reg;
+                    colorAssigned = true;
+                    break;
+                }
+            }
+
+            if (!colorAssigned)
+                m_spilledTmps.append(tmpIndex);
+        }
+
+        m_selectStack.clear();
+
+        if (m_spilledTmps.isEmpty())
+            m_coalescedTmpsAtSpill.clear();
+        else
+            m_coloredTmp.clear();
+    }
+
+    void dumpInterferenceGraphInDot(PrintStream&amp; out)
+    {
+        out.print(&quot;graph InterferenceGraph { \n&quot;);
+
+        HashSet&lt;Tmp&gt; tmpsWithInterferences;
+        for (const auto&amp; edge : m_interferenceEdges) {
+            tmpsWithInterferences.add(TmpMapper::tmpFromAbsoluteIndex(edge.first()));
+            tmpsWithInterferences.add(TmpMapper::tmpFromAbsoluteIndex(edge.second()));
+        }
+
+        for (const auto&amp; tmp : tmpsWithInterferences) {
+            unsigned tmpIndex = TmpMapper::absoluteIndex(tmp);
+            if (tmpIndex &lt; m_degrees.size())
+                out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot; (&quot;, m_degrees[tmpIndex], &quot;)\&quot;];\n&quot;);
+            else
+                out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot;\&quot;];\n&quot;);
+        }
+
+        for (const auto&amp; edge : m_interferenceEdges)
+            out.print(&quot;    &quot;, edge.first(), &quot; -- &quot;, edge.second(), &quot;;\n&quot;);
+        out.print(&quot;}\n&quot;);
+    }
+
+    // Interference edges are not directed. An edge between any two Tmps is represented
+    // by concatenating the smaller Tmp index (in the high bits) with the larger one (in the low bits).
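+    // For example (hypothetical indices), an edge between Tmp indices 2 and 5 is stored as
+    // (uint64_t(2) &lt;&lt; 32) | 5, so first() returns 2 and second() returns 5 regardless of the
+    // order in which the constructor received them.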
+    class InterferenceEdge {
+    public:
+        InterferenceEdge()
+        {
+        }
+
+        InterferenceEdge(IndexType a, IndexType b)
+        {
+            ASSERT(a);
+            ASSERT(b);
+            ASSERT_WITH_MESSAGE(a != b, &quot;A Tmp can never interfere with itself. Doing so would force it to be the superposition of two registers.&quot;);
+
+            if (b &lt; a)
+                std::swap(a, b);
+            m_value = static_cast&lt;uint64_t&gt;(a) &lt;&lt; 32 | b;
+        }
+
+        InterferenceEdge(WTF::HashTableDeletedValueType)
+            : m_value(std::numeric_limits&lt;uint64_t&gt;::max())
+        {
+        }
+
+        IndexType first() const
+        {
+            return m_value &gt;&gt; 32 &amp; 0xffffffff;
+        }
+
+        IndexType second() const
+        {
+            return m_value &amp; 0xffffffff;
+        }
+
+        bool operator==(const InterferenceEdge other) const
+        {
+            return m_value == other.m_value;
+        }
+
+        bool isHashTableDeletedValue() const
+        {
+            return *this == InterferenceEdge(WTF::HashTableDeletedValue);
+        }
+
+        unsigned hash() const
+        {
+            return WTF::IntHash&lt;uint64_t&gt;::hash(m_value);
+        }
+
+        void dump(PrintStream&amp; out) const
+        {
+            out.print(first(), &quot;&lt;=&gt;&quot;, second());
+        }
+
+    private:
+        uint64_t m_value { 0 };
+    };
+
+    bool addInterferenceEdge(InterferenceEdge edge)
+    {
+        return m_interferenceEdges.add(edge).isNewEntry;
+    }
+
+    bool hasInterferenceEdge(InterferenceEdge edge)
+    {
+        return m_interferenceEdges.contains(edge);
+    }
+
+    struct InterferenceEdgeHash {
+        static unsigned hash(const InterferenceEdge&amp; key) { return key.hash(); }
+        static bool equal(const InterferenceEdge&amp; a, const InterferenceEdge&amp; b) { return a == b; }
+        static const bool safeToCompareToEmptyOrDeleted = true;
+    };
+    typedef SimpleClassHashTraits&lt;InterferenceEdge&gt; InterferenceEdgeHashTraits;
+
+    const Vector&lt;Reg&gt;&amp; m_regsInPriorityOrder;
+    RegisterSet m_mutableRegs;
+    IndexType m_lastPrecoloredRegisterIndex { 0 };
+
+    // The interference graph.
+    HashSet&lt;InterferenceEdge, InterferenceEdgeHash, InterferenceEdgeHashTraits&gt; m_interferenceEdges;
+
+    Vector&lt;Vector&lt;IndexType, 0, UnsafeVectorOverflow, 4&gt;, 0, UnsafeVectorOverflow&gt; m_adjacencyList;
+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_degrees;
+
+    // Instead of keeping track of the move instructions, we just keep their operands around and use the index
+    // in the vector as the &quot;identifier&quot; for the move.
+    struct MoveOperands {
+        IndexType srcIndex;
+        IndexType dstIndex;
+    };
+    Vector&lt;MoveOperands, 0, UnsafeVectorOverflow&gt; m_coalescingCandidates;
+
+    // List of every move instruction associated with a Tmp.
+    Vector&lt;HashSet&lt;IndexType, typename DefaultHash&lt;IndexType&gt;::Hash, WTF::UnsignedWithZeroKeyHashTraits&lt;IndexType&gt;&gt;&gt; m_moveList;
+
+    // Colors.
+    Vector&lt;Reg, 0, UnsafeVectorOverflow&gt; m_coloredTmp;
+    Vector&lt;IndexType&gt; m_spilledTmps;
+
+    // Values that have been coalesced with another value.
+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmps;
+
+    // The stack of Tmp removed from the graph and ready for coloring.
+    BitVector m_isOnSelectStack;
+    Vector&lt;IndexType&gt; m_selectStack;
+
+    IndexType m_framePointerIndex { 0 };
+    BitVector m_interferesWithFramePointer;
+    // Low-degree, non-Move related.
+    Vector&lt;IndexType&gt; m_simplifyWorklist;
+    // High-degree Tmp.
+    BitVector m_spillWorklist;
+
+    bool m_hasSelectedSpill { false };
+    bool m_hasCoalescedNonTrivialMove { false };
+
+    // The mapping of Tmp to their alias for Moves that are always coalescing regardless of spilling.
+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmpsAtSpill;
+
+    const HashSet&lt;unsigned&gt;&amp; m_unspillableTmps;
+    const UseCounts&lt;Tmp&gt;&amp; m_useCounts;
+    Code&amp; m_code;
+};
+
+template &lt;typename IndexType, typename TmpMapper&gt;
+class Briggs : public AbstractColoringAllocator&lt;IndexType, TmpMapper&gt; {
+    using Base = AbstractColoringAllocator&lt;IndexType, TmpMapper&gt;;
+protected:
+    using Base::m_isOnSelectStack;
+    using Base::m_selectStack;
+    using Base::m_framePointerIndex;
+    using Base::m_interferesWithFramePointer;
+    using Base::m_simplifyWorklist;
+    using Base::m_spillWorklist;
+    using Base::m_hasSelectedSpill;
+    using Base::m_hasCoalescedNonTrivialMove;
+    using Base::m_degrees;
+    using Base::registerCount;
+    using Base::m_coalescedTmps;
+    using Base::m_moveList;
+    using Base::m_useCounts;
+    using Base::m_coalescedTmpsAtSpill;
+    using Base::m_spilledTmps;
+    using MoveOperands = typename Base::MoveOperands;
+    using Base::m_coalescingCandidates;
+    using Base::m_lastPrecoloredRegisterIndex;
+    using Base::m_coloredTmp;
+    using Base::m_code;
+    using InterferenceEdge = typename Base::InterferenceEdge;
+    using Base::m_unspillableTmps;
+    using Base::hasInterferenceEdge;
+    using Base::getAlias;
+    using Base::addEdge;
+    using Base::isPrecolored;
+    using Base::canBeSafelyCoalesced;
+    using Base::addEdgeDistinctWithoutDegreeChange;
+    using Base::forEachAdjacent;
+    using Base::hasBeenSimplified;
+    using Base::addToSpill;
+    using Base::m_interferenceEdges;
+
+public:
+    Briggs(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
+        : Base(code, regsInPriorityOrder, lastPrecoloredRegisterIndex, tmpArraySize, unspillableTmps, useCounts)
+    {
+    }
+
+    void allocate()
+    {
+        bool changed = false;
+
+        auto coalesceMove = [&amp;] (unsigned&amp; move) {
+            bool coalesced = coalesce(move);
+            if (coalesced) {
+                changed = true;
+                // We won't need to handle this move in the future since it's already coalesced.
+                move = UINT_MAX;
+            }
+        };
+
+        // We first coalesce until we can't coalesce any more.
+        do {
+            changed = false;
+            m_worklistMoves.forEachMove(coalesceMove);
+        } while (changed);
+        do {
+            changed = false;
+            m_worklistMoves.forEachLowPriorityMove(coalesceMove);
+        } while (changed);
+
+        // Then we create our select stack. The invariant we start with here is that nodes in
+        // the interference graph with degree &gt;= k are on the spill list. Nodes with degree &lt; k
+        // are on the simplify worklist. A node can move from the spill list to the simplify
+        // list (but not the other way around, note that this is different than IRC because IRC
+        // runs this while coalescing, but we do all our coalescing before this). Once a node is
+        // added to the select stack, it's not on either list, but only on the select stack.
+        // Once on the select stack, logically, it's no longer in the interference graph.
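+        //
+        // Small example of the invariant (hypothetical tmps, K = 2): a tmp of degree 1 starts on the
+        // simplify worklist and a tmp of degree 3 starts on the spill worklist; once enough of the
+        // latter's neighbors have been simplified that its degree drops below 2, it migrates to the
+        // simplify worklist, never the other way around.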
+        auto assertInvariants = [&amp;] () {
+            if (ASSERT_DISABLED)
+                return;
+
+            IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+            unsigned registerCount = this-&gt;registerCount();
+            for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
+                if (getAlias(i) != i)
+                    continue;
+                if (m_isOnSelectStack.contains(i)) {
+                    ASSERT(!m_simplifyWorklist.contains(i) &amp;&amp; !m_spillWorklist.contains(i));
+                    continue;
+                }
+                unsigned degree = m_degrees[i];
+                if (degree &gt;= registerCount) {
+                    ASSERT(m_unspillableTmps.contains(i) || m_spillWorklist.contains(i));
+                    ASSERT(!m_simplifyWorklist.contains(i));
+                    continue;
+                }
+                ASSERT(m_simplifyWorklist.contains(i));
+            }
+        };
+
+        makeInitialWorklist();
+        assertInvariants();
+        do {
+            changed = false;
+
+            while (m_simplifyWorklist.size()) {
+                simplify();
+                assertInvariants();
+            }
+
+            if (m_spillWorklist.bitCount()) {
+                selectSpill();
+                changed = true;
+                ASSERT(m_simplifyWorklist.size() == 1);
+            }
+        } while (changed);
+
+        if (!ASSERT_DISABLED) {
+            ASSERT(!m_simplifyWorklist.size());
+            ASSERT(!m_spillWorklist.bitCount());
+            IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+            for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i)
+                ASSERT(hasBeenSimplified(i));
+        }
+
+        assignColors();
+    }
+
+protected:
+    
+    bool coalesce(unsigned&amp; moveIndex)
+    {
+        const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
+        IndexType u = getAlias(moveOperands.srcIndex);
+        IndexType v = getAlias(moveOperands.dstIndex);
+
+        if (isPrecolored(v))
+            std::swap(u, v);
+
+        if (traceDebug)
+            dataLog(&quot;Coalescing move at index &quot;, moveIndex, &quot; u = &quot;, TmpMapper::tmpFromAbsoluteIndex(u), &quot; v = &quot;, TmpMapper::tmpFromAbsoluteIndex(v), &quot;    &quot;);
+
+        if (u == v) {
+            if (traceDebug)
+                dataLog(&quot;Already Coalesced. They're equal.\n&quot;);
+            return false;
+        }
+
+        if (isPrecolored(v)
+            || hasInterferenceEdge(InterferenceEdge(u, v))
+            || (u == m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))) {
+
+            // No need to ever consider this move again if it interferes.
+            // No coalescing will remove the interference.
+            moveIndex = UINT_MAX;
+
+            if (!ASSERT_DISABLED) {
+                if (isPrecolored(v))
+                    ASSERT(isPrecolored(u));
+            }
+
+            if (traceDebug)
+                dataLog(&quot;Constrained\n&quot;);
+
+            return false;
+        }
+        
+        if (canBeSafelyCoalesced(u, v)) {
+            combine(u, v);
+            m_hasCoalescedNonTrivialMove = true;
+
+            if (traceDebug)
+                dataLog(&quot;    Safe Coalescing\n&quot;);
+            return true;
+        }
+
+        if (traceDebug)
+            dataLog(&quot;    Failed coalescing.\n&quot;);
+
+        return false;
+    }
+
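+    // How combining updates degrees (illustrative): if v's neighbors are {a, b} and u already
+    // interferes with b but not with a, then after combine(u, v) the edge (a, v) is replaced by a
+    // new edge (a, u), so u's degree grows by one (unless u is precolored), while b's degree shrinks
+    // by one because its separate edges to u and v collapse into a single edge to u.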
+    void combine(IndexType u, IndexType v)
+    {
+        ASSERT(!m_coalescedTmps[v]);
+        m_coalescedTmps[v] = u;
+
+        auto&amp; vMoves = m_moveList[v];
+        m_moveList[u].add(vMoves.begin(), vMoves.end());
+
+        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
+            if (addEdgeDistinctWithoutDegreeChange(adjacentTmpIndex, u)) {
+                // If we added a new edge between the adjacentTmp and u, it replaces the edge
+                // that existed with v.
+                // The degree of adjacentTmp remains the same since the edge just changed from v to u.
+                // All we need to do is update the degree of u.
+                if (!isPrecolored(u))
+                    m_degrees[u]++;
+            } else {
+                // If we already had an edge between the adjacentTmp and u, the degree of u
+                // is already correct. The degree of the adjacentTmp decreases since the edge
+                // with v is no longer relevant (we can think of it as merged with the edge with u).
+                decrementDegree(adjacentTmpIndex);
+            }
+        });
+
+        if (m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))
+            m_interferesWithFramePointer.quickSet(u);
+    }
+
+
+    void makeInitialWorklist()
+    {
+        m_simplifyWorklist.clear();
+        m_spillWorklist.clearAll();
+
+        if (traceDebug)
+            dataLogLn(&quot;------------------\nMaking initial worklist&quot;);
+
+        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        unsigned registerCount = this-&gt;registerCount();
+        for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
+            ASSERT(!isPrecolored(i));
+            if (hasBeenSimplified(i))
+                continue;
+            unsigned degree = m_degrees[i];
+            if (degree &lt; registerCount) {
+                if (traceDebug)
+                    dataLogLn(&quot;Adding &quot;, TmpMapper::tmpFromAbsoluteIndex(i), &quot; to simplify worklist&quot;);
+                m_simplifyWorklist.append(i);
+            } else {
+                if (traceDebug)
+                    dataLogLn(&quot;Adding &quot;, TmpMapper::tmpFromAbsoluteIndex(i), &quot; to spill worklist&quot;);
+                addToSpill(i);
+            }
+        }
+    }
+
+    // A low-degree vertex can always be colored: just pick any color not taken by its
+    // adjacent vertices.
+    // The &quot;Simplify&quot; phase takes a low-degree vertex out of the interference graph to simplify it.
+    void simplify()
+    {
+        IndexType lastIndex = m_simplifyWorklist.takeLast();
+
+        ASSERT(!m_selectStack.contains(lastIndex));
+        ASSERT(!m_isOnSelectStack.get(lastIndex));
+        ASSERT(!m_spillWorklist.contains(lastIndex));
+
+        m_selectStack.append(lastIndex);
+        m_isOnSelectStack.quickSet(lastIndex);
+
+        if (traceDebug)
+            dataLogLn(&quot;Simplifying &quot;, lastIndex, &quot; by adding it to select stack&quot;);
+
+        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
+            decrementDegreeInSimplification(adjacentTmpIndex);
+        });
+    }
+
+    void selectSpill()
+    {
+        IndexType victimIndex = Base::selectSpill();
+        m_spillWorklist.quickClear(victimIndex);
+        m_simplifyWorklist.append(victimIndex);
+    }
+
+    struct MoveSet {
+        unsigned addMove()
+        {
+            ASSERT(m_lowPriorityMoveList.isEmpty());
+
+            unsigned nextIndex = m_positionInMoveList++;
+            m_moveList.append(nextIndex);
+            return nextIndex;
+        }
+
+        void startAddingLowPriorityMoves()
+        {
+            ASSERT(m_lowPriorityMoveList.isEmpty());
+        }
+
+        unsigned addLowPriorityMove()
+        {
+            unsigned nextIndex = m_positionInMoveList++;
+            m_lowPriorityMoveList.append(nextIndex);
+
+            return nextIndex;
+        }
+
+        // We pass each move by reference because the function
+        // may coalesce it; in that case it sets the move to UINT_MAX
+        // so it does not need to be visited again.
+        template &lt;typename Function&gt;
+        void forEachMove(Function function)
+        {
+            for (unsigned&amp; move : m_moveList) {
+                if (move != UINT_MAX)
+                    function(move);
+            }
+        }
+
+        template &lt;typename Function&gt;
+        void forEachLowPriorityMove(Function function)
+        {
+            for (unsigned&amp; move : m_lowPriorityMoveList) {
+                if (move != UINT_MAX)
+                    function(move);
+            }
+        }
+
+        void clear()
+        {
+            m_positionInMoveList = 0;
+            m_moveList.clear();
+            m_lowPriorityMoveList.clear();
+        }
+
+    private:
+        unsigned m_positionInMoveList;
+        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_moveList;
+        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_lowPriorityMoveList;
+    };
+
+    void decrementDegree(IndexType tmpIndex)
+    {
+        ASSERT(m_degrees[tmpIndex]);
+        --m_degrees[tmpIndex];
+    }
+
+    void decrementDegreeInSimplification(IndexType tmpIndex)
+    {
+        ASSERT(m_degrees[tmpIndex]);
+        unsigned oldDegree = m_degrees[tmpIndex]--;
+
+        if (oldDegree == registerCount()) {
+            ASSERT(m_degrees[tmpIndex] &lt; registerCount());
+            if (traceDebug)
+                dataLogLn(&quot;Moving tmp &quot;, tmpIndex, &quot; from spill list to simplify list because its degree is now less than k&quot;);
+
+            if (!ASSERT_DISABLED)
+                ASSERT(m_unspillableTmps.contains(tmpIndex) || m_spillWorklist.contains(tmpIndex));
+            m_spillWorklist.quickClear(tmpIndex);
+
+            ASSERT(!m_simplifyWorklist.contains(tmpIndex));
+            m_simplifyWorklist.append(tmpIndex);
+        }
+    }
+
+    void assignColors()
+    {
+        m_worklistMoves.clear();
+        Base::assignColors();
+    }
+
+    // Set of &quot;move&quot; instructions eligible for coalescing.
+    MoveSet m_worklistMoves;
+};
+
+template &lt;typename IndexType, typename TmpMapper&gt;
+class IRC : public AbstractColoringAllocator&lt;IndexType, TmpMapper&gt; {
+    using Base = AbstractColoringAllocator&lt;IndexType, TmpMapper&gt;;
+protected:
+    using Base::m_isOnSelectStack;
+    using Base::m_selectStack;
+    using Base::m_framePointerIndex;
+    using Base::m_interferesWithFramePointer;
+    using Base::m_simplifyWorklist;
+    using Base::m_spillWorklist;
+    using Base::m_hasSelectedSpill;
+    using Base::m_hasCoalescedNonTrivialMove;
+    using Base::m_degrees;
+    using Base::registerCount;
+    using Base::m_coalescedTmps;
+    using Base::m_moveList;
+    using Base::m_useCounts;
+    using Base::m_coalescedTmpsAtSpill;
+    using Base::m_spilledTmps;
+    using MoveOperands = typename Base::MoveOperands;
+    using Base::m_coalescingCandidates;
+    using Base::m_lastPrecoloredRegisterIndex;
+    using Base::m_coloredTmp;
+    using Base::m_code;
+    using InterferenceEdge = typename Base::InterferenceEdge;
+    using Base::m_unspillableTmps;
+    using Base::hasInterferenceEdge;
+    using Base::getAlias;
+    using Base::addEdge;
+    using Base::isPrecolored;
+    using Base::canBeSafelyCoalesced;
+    using Base::addEdgeDistinctWithoutDegreeChange;
+    using Base::forEachAdjacent;
+    using Base::hasBeenSimplified;
+    using Base::addToSpill;
+    using Base::m_interferenceEdges;
+    using Base::m_adjacencyList;
+    using Base::dumpInterferenceGraphInDot;
+
+public:
+    IRC(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
+        : Base(code, regsInPriorityOrder, lastPrecoloredRegisterIndex, tmpArraySize, unspillableTmps, useCounts)
+    {
+    }
+
+    void allocate()
+    {
+        m_activeMoves.ensureSize(m_worklistMoves.totalNumberOfMoves());
+        ASSERT_WITH_MESSAGE(m_activeMoves.size() &gt;= m_coalescingCandidates.size(), &quot;The activeMove set should be big enough for the quick operations of BitVector.&quot;);
+
+        makeWorkList();
+
+        if (debug) {
+            dumpInterferenceGraphInDot(WTF::dataFile());
+            dataLog(&quot;Coalescing candidates:\n&quot;);
+            for (MoveOperands&amp; moveOp : m_coalescingCandidates) {
+                dataLog(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(moveOp.srcIndex),
+                    &quot; -&gt; &quot;, TmpMapper::tmpFromAbsoluteIndex(moveOp.dstIndex), &quot;\n&quot;);
+            }
+            dataLog(&quot;Initial work list\n&quot;);
+            dumpWorkLists(WTF::dataFile());
+        }
+
+        do {
+            if (traceDebug) {
+                dataLog(&quot;Before Graph simplification iteration\n&quot;);
+                dumpWorkLists(WTF::dataFile());
+            }
+
+            if (!m_simplifyWorklist.isEmpty())
+                simplify();
+            else if (!m_worklistMoves.isEmpty())
+                coalesce();
+            else if (!m_freezeWorklist.isEmpty())
+                freeze();
+            else if (m_spillWorklist.bitCount())
+                selectSpill();
+
+            if (traceDebug) {
+                dataLog(&quot;After Graph simplification iteration\n&quot;);
+                dumpWorkLists(WTF::dataFile());
+            }
+        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || m_spillWorklist.bitCount());
+
+        assignColors();
+    }
+
+protected:
+
+    void makeWorkList()
+    {
+        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
+            unsigned degree = m_degrees[i];
+            if (degree &gt;= registerCount())
+                addToSpill(i);
+            else if (!m_moveList[i].isEmpty())
+                m_freezeWorklist.add(i);
+            else
+                m_simplifyWorklist.append(i);
+        }
+    }
+
+    // A low-degree vertex can always be colored: just pick any color not taken by its
+    // adjacent vertices.
+    // The &quot;Simplify&quot; phase takes a low-degree vertex out of the interference graph to simplify it.
+    void simplify()
+    {
+        IndexType lastIndex = m_simplifyWorklist.takeLast();
+
+        ASSERT(!m_selectStack.contains(lastIndex));
+        ASSERT(!m_isOnSelectStack.get(lastIndex));
+        m_selectStack.append(lastIndex);
+        m_isOnSelectStack.quickSet(lastIndex);
+
+        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
+            decrementDegree(adjacentTmpIndex);
+        });
+    }
+
+    void coalesce()
+    {
+        unsigned moveIndex = m_worklistMoves.takeLastMove();
+        const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
+        IndexType u = getAlias(moveOperands.srcIndex);
+        IndexType v = getAlias(moveOperands.dstIndex);
+
+        if (isPrecolored(v))
+            std::swap(u, v);
+
+        if (traceDebug)
+            dataLog(&quot;Coalescing move at index &quot;, moveIndex, &quot; u = &quot;, u, &quot; v = &quot;, v, &quot;\n&quot;);
+
+        if (u == v) {
+            addWorkList(u);
+
+            if (traceDebug)
+                dataLog(&quot;    Coalesced\n&quot;);
+        } else if (isPrecolored(v)
+            || hasInterferenceEdge(InterferenceEdge(u, v))
+            || (u == m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))) {
+            addWorkList(u);
+            addWorkList(v);
+
+            if (traceDebug)
+                dataLog(&quot;    Constrained\n&quot;);
+        } else if (canBeSafelyCoalesced(u, v)) {
+            combine(u, v);
+            addWorkList(u);
+            m_hasCoalescedNonTrivialMove = true;
+
+            if (traceDebug)
+                dataLog(&quot;    Safe Coalescing\n&quot;);
+        } else {
+            m_activeMoves.quickSet(moveIndex);
+            if (traceDebug)
+                dataLog(&quot;    Failed coalescing, added to active moves.\n&quot;);
+        }
+    }
+
+    void addWorkList(IndexType tmpIndex)
+    {
+        if (!isPrecolored(tmpIndex) &amp;&amp; m_degrees[tmpIndex] &lt; registerCount() &amp;&amp; !isMoveRelated(tmpIndex)) {
+            m_freezeWorklist.remove(tmpIndex);
+            m_simplifyWorklist.append(tmpIndex);
+        }
+    }
+
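+    // Merge v into u: v is recorded as coalesced into u, its moves are folded into u's
+    // move list, and every edge that used to reach v is redirected to u while keeping
+    // the degrees consistent.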
+    void combine(IndexType u, IndexType v)
+    {
+        if (!m_freezeWorklist.remove(v))
+            m_spillWorklist.quickClear(v);
+
+        ASSERT(!m_coalescedTmps[v]);
+        m_coalescedTmps[v] = u;
+
+        auto&amp; vMoves = m_moveList[v];
+        m_moveList[u].add(vMoves.begin(), vMoves.end());
+
+        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
+            if (addEdgeDistinctWithoutDegreeChange(adjacentTmpIndex, u)) {
+                // If we added a new edge between the adjacentTmp and u, it replaces the edge
+                // that existed with v.
+                // The degree of adjacentTmp remains the same since the edge just changed from v to u.
+                // All we need to do is update the degree of u.
+                if (!isPrecolored(u))
+                    m_degrees[u]++;
+            } else {
+                // If we already had an edge between the adjacentTmp and u, the degree of u
+                // is already correct. The degree of the adjacentTmp decreases since the edge
+                // with v is no longer relevant (we can think of it as merged with the edge with u).
+                decrementDegree(adjacentTmpIndex);
+            }
+        });
+
+        if (m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))
+            m_interferesWithFramePointer.quickSet(u);
+
+        if (m_degrees[u] &gt;= registerCount() &amp;&amp; m_freezeWorklist.remove(u))
+            addToSpill(u);
+    }
+
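+    // Neither simplification nor coalescing made progress: give up on coalescing the moves
+    // of one low-degree move-related node so that it becomes a candidate for simplification.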
+    void freeze()
+    {
+        IndexType victimIndex = m_freezeWorklist.takeAny();
+        ASSERT_WITH_MESSAGE(getAlias(victimIndex) == victimIndex, &quot;coalesce() should not leave aliased Tmp in the worklist.&quot;);
+        m_simplifyWorklist.append(victimIndex);
+        freezeMoves(victimIndex);
+    }
+
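+    // Abandon every pending move involving tmpIndex. The other operand of each such move
+    // may in turn stop being move-related and become simplifiable.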
+    void freezeMoves(IndexType tmpIndex)
+    {
+        forEachNodeMoves(tmpIndex, [this, tmpIndex] (IndexType moveIndex) {
+            if (!m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.takeMove(moveIndex);
+
+            const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
+            IndexType srcTmpIndex = moveOperands.srcIndex;
+            IndexType dstTmpIndex = moveOperands.dstIndex;
+
+            IndexType originalOtherTmp = srcTmpIndex != tmpIndex ? srcTmpIndex : dstTmpIndex;
+            IndexType otherTmpIndex = getAlias(originalOtherTmp);
+            if (m_degrees[otherTmpIndex] &lt; registerCount() &amp;&amp; !isMoveRelated(otherTmpIndex)) {
+                if (m_freezeWorklist.remove(otherTmpIndex))
+                    m_simplifyWorklist.append(otherTmpIndex);
+            }
+        });
+    }
+
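+    // When a node's degree drops below the number of registers, it becomes colorable: its
+    // moves and its neighbors' moves are re-enabled for coalescing, and the node leaves
+    // the spill worklist for either the freeze or the simplify worklist.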
+    void decrementDegree(IndexType tmpIndex)
+    {
+        ASSERT(m_degrees[tmpIndex]);
+
+        unsigned oldDegree = m_degrees[tmpIndex]--;
+        if (oldDegree == registerCount()) {
+            enableMovesOnValueAndAdjacents(tmpIndex);
+            m_spillWorklist.quickClear(tmpIndex);
+            if (isMoveRelated(tmpIndex))
+                m_freezeWorklist.add(tmpIndex);
+            else
+                m_simplifyWorklist.append(tmpIndex);
+        }
+    }
+
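+    // Pick a potential spill candidate (the heuristic is in the base class) and push it
+    // through simplification optimistically; it only turns into an actual spill if no
+    // color is left for it when colors are assigned.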
+    void selectSpill()
+    {
+        IndexType victimIndex = Base::selectSpill();
+        m_spillWorklist.quickClear(victimIndex);
+        m_simplifyWorklist.append(victimIndex);
+        freezeMoves(victimIndex);
+    }
+
+    void assignColors()
+    {
+        ASSERT(m_freezeWorklist.isEmpty());
+        m_worklistMoves.clear();
+        Base::assignColors();
+    }
+
+    bool isMoveRelated(IndexType tmpIndex)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                return true;
+        }
+        return false;
+    }
+
+    template&lt;typename Function&gt;
+    void forEachNodeMoves(IndexType tmpIndex, Function function)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                function(moveIndex);
+        }
+    }
+
+    void enableMovesOnValue(IndexType tmpIndex)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.returnMove(moveIndex);
+        }
+    }
+
+    void enableMovesOnValueAndAdjacents(IndexType tmpIndex)
+    {
+        enableMovesOnValue(tmpIndex);
+
+        forEachAdjacent(tmpIndex, [this] (IndexType adjacentTmpIndex) {
+            enableMovesOnValue(adjacentTmpIndex);
+        });
+    }
+
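+    // Set of move indices with O(1) membership test, removal and re-insertion. Moves live
+    // in two lists, regular and low-priority; takeLastMove() drains the regular list before
+    // touching the low-priority one. m_positionInMoveList maps a move index to its position
+    // in whichever list currently holds it, or to std::numeric_limits&lt;unsigned&gt;::max() when
+    // the move has been taken out.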
+    struct OrderedMoveSet {
+        unsigned addMove()
+        {
+            ASSERT(m_lowPriorityMoveList.isEmpty());
+            ASSERT(!m_firstLowPriorityMoveIndex);
+
+            unsigned nextIndex = m_positionInMoveList.size();
+            unsigned position = m_moveList.size();
+            m_moveList.append(nextIndex);
+            m_positionInMoveList.append(position);
+            return nextIndex;
+        }
+
+        void startAddingLowPriorityMoves()
+        {
+            ASSERT(m_lowPriorityMoveList.isEmpty());
+            m_firstLowPriorityMoveIndex = m_moveList.size();
+        }
+
+        unsigned addLowPriorityMove()
+        {
+            ASSERT(m_firstLowPriorityMoveIndex == m_moveList.size());
+
+            unsigned nextIndex = m_positionInMoveList.size();
+            unsigned position = m_lowPriorityMoveList.size();
+            m_lowPriorityMoveList.append(nextIndex);
+            m_positionInMoveList.append(position);
+
+            ASSERT(nextIndex &gt;= m_firstLowPriorityMoveIndex);
+
+            return nextIndex;
+        }
+
+        bool isEmpty() const
+        {
+            return m_moveList.isEmpty() &amp;&amp; m_lowPriorityMoveList.isEmpty();
+        }
+
+        bool contains(unsigned index)
+        {
+            return m_positionInMoveList[index] != std::numeric_limits&lt;unsigned&gt;::max();
+        }
+
+        void takeMove(unsigned moveIndex)
+        {
+            unsigned positionInMoveList = m_positionInMoveList[moveIndex];
+            if (positionInMoveList == std::numeric_limits&lt;unsigned&gt;::max())
+                return;
+
+            if (moveIndex &lt; m_firstLowPriorityMoveIndex) {
+                ASSERT(m_moveList[positionInMoveList] == moveIndex);
+                unsigned lastIndex = m_moveList.last();
+                m_positionInMoveList[lastIndex] = positionInMoveList;
+                m_moveList[positionInMoveList] = lastIndex;
+                m_moveList.removeLast();
+            } else {
+                ASSERT(m_lowPriorityMoveList[positionInMoveList] == moveIndex);
+                unsigned lastIndex = m_lowPriorityMoveList.last();
+                m_positionInMoveList[lastIndex] = positionInMoveList;
+                m_lowPriorityMoveList[positionInMoveList] = lastIndex;
+                m_lowPriorityMoveList.removeLast();
+            }
+
+            m_positionInMoveList[moveIndex] = std::numeric_limits&lt;unsigned&gt;::max();
+
+            ASSERT(!contains(moveIndex));
+        }
+
+        unsigned takeLastMove()
+        {
+            ASSERT(!isEmpty());
+
+            unsigned lastIndex;
+            if (!m_moveList.isEmpty()) {
+                lastIndex = m_moveList.takeLast();
+                ASSERT(m_positionInMoveList[lastIndex] == m_moveList.size());
+            } else {
+                lastIndex = m_lowPriorityMoveList.takeLast();
+                ASSERT(m_positionInMoveList[lastIndex] == m_lowPriorityMoveList.size());
+            }
+            m_positionInMoveList[lastIndex] = std::numeric_limits&lt;unsigned&gt;::max();
+
+            ASSERT(!contains(lastIndex));
+            return lastIndex;
+        }
+
+        void returnMove(unsigned index)
+        {
+            // This assertion is a bit strict but that is how the move list should be used. The only moves that can
+            // return to the list are the ones that we previously failed to coalesce with the conservative heuristics.
+            // Values should not be added back if they were never taken out when attempting coalescing.
+            ASSERT(!contains(index));
+
+            if (index &lt; m_firstLowPriorityMoveIndex) {
+                unsigned position = m_moveList.size();
+                m_moveList.append(index);
+                m_positionInMoveList[index] = position;
+            } else {
+                unsigned position = m_lowPriorityMoveList.size();
+                m_lowPriorityMoveList.append(index);
+                m_positionInMoveList[index] = position;
+            }
+
+            ASSERT(contains(index));
+        }
+
+        void clear()
+        {
+            m_positionInMoveList.clear();
+            m_moveList.clear();
+            m_lowPriorityMoveList.clear();
+        }
+
+        unsigned totalNumberOfMoves()
+        {
+            return m_moveList.size() + m_lowPriorityMoveList.size();
+        }
+
+    private:
+        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_positionInMoveList;
+        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_moveList;
+        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_lowPriorityMoveList;
+        unsigned m_firstLowPriorityMoveIndex { 0 };
+    };
+
+    void dumpWorkLists(PrintStream&amp; out)
+    {
+        out.print(&quot;Simplify work list:\n&quot;);
+        for (unsigned tmpIndex : m_simplifyWorklist)
+            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+        out.printf(&quot;Moves work list is empty? %d\n&quot;, m_worklistMoves.isEmpty());
+        out.print(&quot;Freeze work list:\n&quot;);
+        for (unsigned tmpIndex : m_freezeWorklist)
+            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+        out.print(&quot;Spill work list:\n&quot;);
+        for (unsigned tmpIndex : m_spillWorklist)
+            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+    }
+
+    // Work lists.
+    // Low-degree, Move related.
+    HashSet&lt;IndexType&gt; m_freezeWorklist;
+    // Set of &quot;move&quot; enabled for possible coalescing.
+    OrderedMoveSet m_worklistMoves;
+    // Set of &quot;move&quot; not yet ready for coalescing.
+    BitVector m_activeMoves;
+};
+
+// This performs all the tasks that are specific to a certain register type.
+template&lt;Arg::Type type, template&lt;typename, typename&gt; class AllocatorType&gt;
+class ColoringAllocator : public AllocatorType&lt;unsigned, AbsoluteTmpMapper&lt;type&gt;&gt; {
+    using TmpMapper = AbsoluteTmpMapper&lt;type&gt;;
+    using Base = AllocatorType&lt;unsigned, TmpMapper&gt;;
+    using Base::m_isOnSelectStack;
+    using Base::m_selectStack;
+    using Base::m_framePointerIndex;
+    using Base::m_interferesWithFramePointer;
+    using Base::m_simplifyWorklist;
+    using Base::m_spillWorklist;
+    using Base::m_hasSelectedSpill;
+    using Base::m_hasCoalescedNonTrivialMove;
+    using Base::m_degrees;
+    using Base::registerCount;
+    using Base::m_coalescedTmps;
+    using Base::m_moveList;
+    using Base::m_useCounts;
+    using Base::m_coalescedTmpsAtSpill;
+    using Base::m_spilledTmps;
+    using MoveOperands = typename Base::MoveOperands;
+    using Base::m_coalescingCandidates;
+    using Base::m_lastPrecoloredRegisterIndex;
+    using Base::m_coloredTmp;
+    using Base::m_code;
+    using InterferenceEdge = typename Base::InterferenceEdge;
+    using Base::m_worklistMoves;
+    using Base::hasInterferenceEdge;
+    using Base::getAlias;
+    using Base::addEdge;
+    using Base::m_interferenceEdges;
+
+public:
+
+    ColoringAllocator(Code&amp; code, TmpWidth&amp; tmpWidth, const UseCounts&lt;Tmp&gt;&amp; useCounts, const HashSet&lt;unsigned&gt;&amp; unspillableTmp)
+        : Base(code, code.regsInPriorityOrder(type), TmpMapper::lastMachineRegisterIndex(), tmpArraySize(code), unspillableTmp, useCounts)
+        , m_tmpWidth(tmpWidth)
+    {
+        if (type == Arg::GP) {
+            m_framePointerIndex = TmpMapper::absoluteIndex(Tmp(MacroAssembler::framePointerRegister));
+            m_interferesWithFramePointer.ensureSize(tmpArraySize(code));
+        }
+
+        initializePrecoloredTmp();
+        build();
+    }
+
+    Tmp getAlias(Tmp tmp) const
+    {
+        return TmpMapper::tmpFromAbsoluteIndex(getAlias(TmpMapper::absoluteIndex(tmp)));
+    }
+
+    // This tells you if a Move will be coalescable if the src and dst end up matching. This method
+    // relies on an analysis that is invalidated by register allocation, so it's only meaningful to
+    // call this *before* replacing the Tmps in this Inst with registers or spill slots.
+    bool mayBeCoalescable(const Inst&amp; inst) const
+    {
+        return mayBeCoalescableImpl(inst, &amp;m_tmpWidth);
+    }
+
+    bool isUselessMove(const Inst&amp; inst) const
+    {
+        return mayBeCoalescableImpl(inst, nullptr) &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp();
+    }
+
+    Tmp getAliasWhenSpilling(Tmp tmp) const
+    {
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.isEmpty(), &quot;This function is only valid for coalescing during spilling.&quot;);
+
+        if (m_coalescedTmpsAtSpill.isEmpty())
+            return tmp;
+
+        unsigned aliasIndex = TmpMapper::absoluteIndex(tmp);
+        while (unsigned nextAliasIndex = m_coalescedTmpsAtSpill[aliasIndex])
+            aliasIndex = nextAliasIndex;
+
+        Tmp alias = TmpMapper::tmpFromAbsoluteIndex(aliasIndex);
+
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.contains(aliasIndex) || alias == tmp, &quot;The aliases at spill should always be colorable. Something went horribly wrong.&quot;);
+
+        return alias;
+    }
+
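+    // Adaptors that let callers iterate over collections of absolute indices as if they
+    // contained Tmps.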
+    template&lt;typename IndexIterator&gt;
+    class IndexToTmpIteratorAdaptor {
+    public:
+        IndexToTmpIteratorAdaptor(IndexIterator&amp;&amp; indexIterator)
+            : m_indexIterator(WTFMove(indexIterator))
+        {
+        }
+
+        Tmp operator*() const { return TmpMapper::tmpFromAbsoluteIndex(*m_indexIterator); }
+        IndexToTmpIteratorAdaptor&amp; operator++() { ++m_indexIterator; return *this; }
+
+        bool operator==(const IndexToTmpIteratorAdaptor&amp; other) const
+        {
+            return m_indexIterator == other.m_indexIterator;
+        }
+
+        bool operator!=(const IndexToTmpIteratorAdaptor&amp; other) const
+        {
+            return !(*this == other);
+        }
+
+    private:
+        IndexIterator m_indexIterator;
+    };
+
+    template&lt;typename Collection&gt;
+    class IndexToTmpIterableAdaptor {
+    public:
+        IndexToTmpIterableAdaptor(const Collection&amp; collection)
+            : m_collection(collection)
+        {
+        }
+
+        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; begin() const
+        {
+            return m_collection.begin();
+        }
+
+        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; end() const
+        {
+            return m_collection.end();
+        }
+
+    private:
+        const Collection&amp; m_collection;
+    };
+
+    IndexToTmpIterableAdaptor&lt;Vector&lt;unsigned&gt;&gt; spilledTmps() const { return m_spilledTmps; }
+
+    bool requiresSpilling() const { return !m_spilledTmps.isEmpty(); }
+
+    Reg allocatedReg(Tmp tmp) const
+    {
+        ASSERT(!tmp.isReg());
+        ASSERT(m_coloredTmp.size());
+        ASSERT(tmp.isGP() == (type == Arg::GP));
+
+        Reg reg = m_coloredTmp[TmpMapper::absoluteIndex(tmp)];
+        if (!reg) {
+            dataLog(&quot;FATAL: No color for &quot;, tmp, &quot;\n&quot;);
+            dataLog(&quot;Code:\n&quot;);
+            dataLog(m_code);
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+        return reg;
+    }
+
+protected:
+    static unsigned tmpArraySize(Code&amp; code)
+    {
+        unsigned numTmps = code.numTmps(type);
+        return TmpMapper::absoluteIndex(numTmps);
+    }
+
+    void initializePrecoloredTmp()
+    {
+        m_coloredTmp.resize(m_lastPrecoloredRegisterIndex + 1);
+        for (unsigned i = 1; i &lt;= m_lastPrecoloredRegisterIndex; ++i) {
+            Tmp tmp = TmpMapper::tmpFromAbsoluteIndex(i);
+            ASSERT(tmp.isReg());
+            m_coloredTmp[i] = tmp.reg();
+        }
+    }
+
+    bool mayBeCoalesced(Arg left, Arg right)
+    {
+        if (!left.isTmp() || !right.isTmp())
+            return false;
+
+        Tmp leftTmp = left.tmp();
+        Tmp rightTmp = right.tmp();
+
+        if (leftTmp == rightTmp)
+            return false;
+
+        if (leftTmp.isGP() != (type == Arg::GP) || rightTmp.isGP() != (type == Arg::GP))
+            return false;
+
+        unsigned leftIndex = TmpMapper::absoluteIndex(leftTmp);
+        unsigned rightIndex = TmpMapper::absoluteIndex(rightTmp);
+
+        return !hasInterferenceEdge(InterferenceEdge(leftIndex, rightIndex));
+    }
+
+    void addToLowPriorityCoalescingCandidates(Arg left, Arg right)
+    {
+        ASSERT(mayBeCoalesced(left, right));
+        Tmp leftTmp = left.tmp();
+        Tmp rightTmp = right.tmp();
+
+        unsigned leftIndex = TmpMapper::absoluteIndex(leftTmp);
+        unsigned rightIndex = TmpMapper::absoluteIndex(rightTmp);
+
+        unsigned nextMoveIndex = m_coalescingCandidates.size();
+        m_coalescingCandidates.append({ leftIndex, rightIndex });
+
+        unsigned newIndexInWorklist = m_worklistMoves.addLowPriorityMove();
+        ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
+
+        m_moveList[leftIndex].add(nextMoveIndex);
+        m_moveList[rightIndex].add(nextMoveIndex);
+    }
+
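+    // Build the interference graph by walking each block backward with a local liveness
+    // calculation: at each instruction boundary, the defs (and extra clobbered registers)
+    // interfere with everything live there, while coalescable moves are recorded as
+    // coalescing candidates instead of getting a use-def interference edge.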
+    void build()
+    {
+        m_coalescingCandidates.clear();
+        m_worklistMoves.clear();
+
+        TmpLiveness&lt;type&gt; liveness(m_code);
+        for (BasicBlock* block : m_code) {
+            typename TmpLiveness&lt;type&gt;::LocalCalc localCalc(liveness, block);
+            for (unsigned instIndex = block-&gt;size(); instIndex--;) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+                Inst* nextInst = block-&gt;get(instIndex + 1);
+                build(&amp;inst, nextInst, localCalc);
+                localCalc.execute(instIndex);
+            }
+            build(nullptr, &amp;block-&gt;at(0), localCalc);
+        }
+        buildLowPriorityMoveList();
+    }
+
+    void build(Inst* prevInst, Inst* nextInst, const typename TmpLiveness&lt;type&gt;::LocalCalc&amp; localCalc)
+    {
+        if (traceDebug)
+            dataLog(&quot;Building between &quot;, pointerDump(prevInst), &quot; and &quot;, pointerDump(nextInst), &quot;:\n&quot;);
+
+        Inst::forEachDefWithExtraClobberedRegs&lt;Tmp&gt;(
+            prevInst, nextInst,
+            [&amp;] (const Tmp&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
+                if (argType != type)
+                    return;
+                
+                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
+                // We should not use forEachDefWithExtraClobberedRegs() here since colored Tmps
+                // do not need interference edges in our implementation.
+                Inst::forEachDef&lt;Tmp&gt;(
+                    prevInst, nextInst,
+                    [&amp;] (Tmp&amp; otherArg, Arg::Role, Arg::Type argType, Arg::Width) {
+                        if (argType != type)
+                            return;
+                        
+                        if (traceDebug)
+                            dataLog(&quot;    Adding def-def edge: &quot;, arg, &quot;, &quot;, otherArg, &quot;\n&quot;);
+                        this-&gt;addEdge(arg, otherArg);
+                    });
+            });
+
+        if (prevInst &amp;&amp; mayBeCoalescable(*prevInst)) {
+            // We do not want the Use() of this move to interfere with the Def(), even if it is live
+            // after the Move. If we were to add the interference edge, it would be impossible to
+            // coalesce the Move even if the two Tmps never interfere anywhere.
+            Tmp defTmp;
+            Tmp useTmp;
+            prevInst-&gt;forEachTmp([&amp;defTmp, &amp;useTmp] (Tmp&amp; argTmp, Arg::Role role, Arg::Type, Arg::Width) {
+                if (Arg::isLateDef(role))
+                    defTmp = argTmp;
+                else {
+                    ASSERT(Arg::isEarlyUse(role));
+                    useTmp = argTmp;
+                }
+            });
+            ASSERT(defTmp);
+            ASSERT(useTmp);
+
+            unsigned nextMoveIndex = m_coalescingCandidates.size();
+            m_coalescingCandidates.append({ TmpMapper::absoluteIndex(useTmp), TmpMapper::absoluteIndex(defTmp) });
+            if (traceDebug)
+                dataLogLn(&quot;Move at index &quot;, nextMoveIndex, &quot; is: &quot;, *prevInst);
+
+            unsigned newIndexInWorklist = m_worklistMoves.addMove();
+            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
+
+            for (const Arg&amp; arg : prevInst-&gt;args) {
+                auto&amp; list = m_moveList[TmpMapper::absoluteIndex(arg.tmp())];
+                list.add(nextMoveIndex);
+            }
+
+            for (const Tmp&amp; liveTmp : localCalc.live()) {
+                if (liveTmp != useTmp) {
+                    if (traceDebug)
+                        dataLog(&quot;    Adding def-live for coalescable: &quot;, defTmp, &quot;, &quot;, liveTmp, &quot;\n&quot;);
+                    addEdge(defTmp, liveTmp);
+                }
+            }
+
+            // The next instruction could have early clobbers or early defs. We need to consider
+            // those now.
+            addEdges(nullptr, nextInst, localCalc.live());
+        } else
+            addEdges(prevInst, nextInst, localCalc.live());
+    }
+
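+    // On x86, some instructions are cheaper when their def ends up aliasing one of their
+    // sources (shouldTryAliasingDef() reports which ones), so record those pairs as
+    // low-priority coalescing candidates.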
+    void buildLowPriorityMoveList()
+    {
+        if (!isX86())
+            return;
+
+        m_worklistMoves.startAddingLowPriorityMoves();
+        for (BasicBlock* block : m_code) {
+            for (Inst&amp; inst : *block) {
+                if (std::optional&lt;unsigned&gt; defArgIndex = inst.shouldTryAliasingDef()) {
+                    Arg op1 = inst.args[*defArgIndex - 2];
+                    Arg op2 = inst.args[*defArgIndex - 1];
+                    Arg dest = inst.args[*defArgIndex];
+
+                    if (op1 == dest || op2 == dest)
+                        continue;
+
+                    if (mayBeCoalesced(op1, dest))
+                        addToLowPriorityCoalescingCandidates(op1, dest);
+                    if (op1 != op2 &amp;&amp; mayBeCoalesced(op2, dest))
+                        addToLowPriorityCoalescingCandidates(op2, dest);
+                }
+            }
+        }
+    }
+
+    void addEdges(Inst* prevInst, Inst* nextInst, typename TmpLiveness&lt;type&gt;::LocalCalc::Iterable liveTmps)
+    {
+        // All the Def()s interfere with everything live.
+        Inst::forEachDefWithExtraClobberedRegs&lt;Tmp&gt;(
+            prevInst, nextInst,
+            [&amp;] (const Tmp&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
+                if (argType != type)
+                    return;
+                
+                for (const Tmp&amp; liveTmp : liveTmps) {
+                    ASSERT(liveTmp.isGP() == (type == Arg::GP));
+                    
+                    if (traceDebug)
+                        dataLog(&quot;    Adding def-live edge: &quot;, arg, &quot;, &quot;, liveTmp, &quot;\n&quot;);
+                    
+                    addEdge(arg, liveTmp);
+                }
+
+                if (type == Arg::GP &amp;&amp; !arg.isGPR())
+                    m_interferesWithFramePointer.quickSet(TmpMapper::absoluteIndex(arg));
+            });
+    }
+
+    void addEdge(Tmp a, Tmp b)
+    {
+        ASSERT_WITH_MESSAGE(a.isGP() == b.isGP(), &quot;An interference between registers of different types does not make sense, it can lead to non-colorable graphs.&quot;);
+
+        addEdge(TmpMapper::absoluteIndex(a), TmpMapper::absoluteIndex(b));
+    }
+
+    // Calling this without a tmpWidth will perform a more conservative coalescing analysis that assumes
+    // that Move32's are not coalescable.
+    static bool mayBeCoalescableImpl(const Inst&amp; inst, TmpWidth* tmpWidth)
+    {
+        switch (type) {
+        case Arg::GP:
+            switch (inst.kind.opcode) {
+            case Move:
+            case Move32:
+                break;
+            default:
+                return false;
+            }
+            break;
+        case Arg::FP:
+            switch (inst.kind.opcode) {
+            case MoveFloat:
+            case MoveDouble:
+                break;
+            default:
+                return false;
+            }
+            break;
+        }
+
+        ASSERT_WITH_MESSAGE(inst.args.size() == 2, &quot;We assume coalescable moves only have two arguments in a few places.&quot;);
+
+        if (!inst.args[0].isTmp() || !inst.args[1].isTmp())
+            return false;
+
+        ASSERT(inst.args[0].type() == type);
+        ASSERT(inst.args[1].type() == type);
+
+        // We can coalesce a Move32 so long as either of the following holds:
+        // - The input is already zero-filled.
+        // - The output only cares about the low 32 bits.
+        //
+        // Note that the input property requires an analysis over ZDef's, so it's only valid so long
+        // as the input gets a register. We don't know if the input gets a register, but we do know
+        // that if it doesn't get a register then we will still emit this Move32.
+        if (inst.kind.opcode == Move32) {
+            if (!tmpWidth)
+                return false;
+
+            if (tmpWidth-&gt;defWidth(inst.args[0].tmp()) &gt; Arg::Width32
+                &amp;&amp; tmpWidth-&gt;useWidth(inst.args[1].tmp()) &gt; Arg::Width32)
+                return false;
+        }
+        
+        return true;
+    }
+
+    TmpWidth&amp; m_tmpWidth;
+};
+
+class GraphColoringRegisterAllocation {
+public:
+    GraphColoringRegisterAllocation(Code&amp; code, UseCounts&lt;Tmp&gt;&amp; useCounts)
+        : m_code(code)
+        , m_useCounts(useCounts)
+    {
+    }
+
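+    // GP and FP registers never interfere with each other, so each bank gets its own,
+    // independent coloring pass.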
+    void run()
+    {
+        padInterference(m_code);
+
+        allocateOnType&lt;Arg::GP&gt;();
+        m_numIterations = 0;
+        allocateOnType&lt;Arg::FP&gt;();
+
+        fixSpillsAfterTerminals();
+
+        if (reportStats)
+            dataLog(&quot;Num iterations = &quot;, m_numIterations, &quot;\n&quot;);
+    }
+
+private:
+    template&lt;Arg::Type type&gt;
+    void allocateOnType()
+    {
+        HashSet&lt;unsigned&gt; unspillableTmps = computeUnspillableTmps&lt;type&gt;();
+
+        // FIXME: If a Tmp is used only from a Scratch role and that argument is !admitsStack, then
+        // we should add the Tmp to unspillableTmps. That will help avoid relooping only to turn the
+        // Tmp into an unspillable Tmp.
+        // https://bugs.webkit.org/show_bug.cgi?id=152699
+        
+        while (true) {
+            ++m_numIterations;
+
+            if (traceDebug)
+                dataLog(&quot;Code at iteration &quot;, m_numIterations, &quot;:\n&quot;, m_code);
+
+            // FIXME: One way to optimize this code is to remove the recomputation inside the fixpoint.
+            // We need to recompute because spilling adds tmps, but we could just update tmpWidth when we
+            // add those tmps. Note that one easy way to remove the recomputation is to make any newly
+            // added Tmps get the same use/def widths that the original Tmp got. But, this may hurt the
+            // spill code we emit. Since we currently recompute TmpWidth after spilling, the newly
+            // created Tmps may get narrower use/def widths. On the other hand, the spiller already
+            // selects which move instruction to use based on the original Tmp's widths, so it may not
+            // matter that a subsequent iteration sees a conservative width for the new Tmps. Also, the
+            // recomputation may not actually be a performance problem; it's likely that a better way to
+            // improve performance of TmpWidth is to replace its HashMap with something else. It's
+            // possible that most of the TmpWidth overhead is from queries of TmpWidth rather than the
+            // recomputation, in which case speeding up the lookup would be a bigger win.
+            // https://bugs.webkit.org/show_bug.cgi?id=152478
+            m_tmpWidth.recompute(m_code);
+
+            auto doAllocation = [&amp;] (auto&amp; allocator) -&gt; bool {
+                allocator.allocate();
+                if (!allocator.requiresSpilling()) {
+                    this-&gt;assignRegistersToTmp&lt;type&gt;(allocator);
+                    if (traceDebug)
+                        dataLog(&quot;Successful allocation at iteration &quot;, m_numIterations, &quot;:\n&quot;, m_code);
+
+                    return true;
+                }
+
+                this-&gt;addSpillAndFill&lt;type&gt;(allocator, unspillableTmps);
+                return false;
+            };
+            
+            bool done;
+            if ((isARM64() || Options::airForceBriggsAllocator()) &amp;&amp; !Options::airForceIRCAllocator()) {
+                ColoringAllocator&lt;type, Briggs&gt; allocator(m_code, m_tmpWidth, m_useCounts, unspillableTmps);
+                done = doAllocation(allocator);
+            } else {
+                ColoringAllocator&lt;type, IRC&gt; allocator(m_code, m_tmpWidth, m_useCounts, unspillableTmps);
+                done = doAllocation(allocator);
+            }
+            if (done)
+                return;
+        }
+    }
+
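+    // Spilling a Tmp whose entire live range spans only a couple of instructions cannot
+    // relieve register pressure (the reload would recreate essentially the same range), so
+    // such Tmps are marked unspillable unless every one of their uses already admits a
+    // stack argument.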
+    template&lt;Arg::Type type&gt;
+    HashSet&lt;unsigned&gt; computeUnspillableTmps()
+    {
+
+        HashSet&lt;unsigned&gt; unspillableTmps;
+
+        struct Range {
+            unsigned first { std::numeric_limits&lt;unsigned&gt;::max() };
+            unsigned last { 0 };
+            unsigned count { 0 };
+            unsigned admitStackCount { 0 };
+        };
+
+        unsigned numTmps = m_code.numTmps(type);
+        unsigned arraySize = AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(numTmps);
+
+        Vector&lt;Range, 0, UnsafeVectorOverflow&gt; ranges;
+        ranges.fill(Range(), arraySize);
+
+        unsigned globalIndex = 0;
+        for (BasicBlock* block : m_code) {
+            for (Inst&amp; inst : *block) {
+                inst.forEachArg([&amp;] (Arg&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
+                    if (arg.isTmp() &amp;&amp; inst.admitsStack(arg)) {
+                        if (argType != type)
+                            return;
+
+                        Tmp tmp = arg.tmp();
+                        Range&amp; range = ranges[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
+                        range.count++;
+                        range.admitStackCount++;
+                        if (globalIndex &lt; range.first) {
+                            range.first = globalIndex;
+                            range.last = globalIndex;
+                        } else
+                            range.last = globalIndex;
+
+                        return;
+                    }
+
+                    arg.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
+                        if (tmp.isGP() != (type == Arg::GP))
+                            return;
+
+                        Range&amp; range = ranges[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
+                        range.count++;
+                        if (globalIndex &lt; range.first) {
+                            range.first = globalIndex;
+                            range.last = globalIndex;
+                        } else
+                            range.last = globalIndex;
+                    });
+                });
+
+                ++globalIndex;
+            }
+            ++globalIndex;
+        }
+        for (unsigned i = AbsoluteTmpMapper&lt;type&gt;::lastMachineRegisterIndex() + 1; i &lt; ranges.size(); ++i) {
+            Range&amp; range = ranges[i];
+            if (range.last - range.first &lt;= 1 &amp;&amp; range.count &gt; range.admitStackCount)
+                unspillableTmps.add(i);
+        }
+
+        return unspillableTmps;
+    }
+
+    template&lt;Arg::Type type, typename AllocatorType&gt;
+    void assignRegistersToTmp(const AllocatorType&amp; allocator)
+    {
+        for (BasicBlock* block : m_code) {
+            // Give Tmp a valid register.
+            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+
+                // The mayBeCoalescable() method will change its mind for some operations after we
+                // complete register allocation. So, we record this before starting.
+                bool mayBeCoalescable = allocator.mayBeCoalescable(inst);
+
+                // Move32 is cheaper if we know that it's equivalent to a Move. It's
+                // equivalent if the destination's high bits are not observable or if the source's high
+                // bits are all zero. Note that we don't have the opposite optimization for other
+                // architectures, which may prefer Move over Move32, because Move is canonical already.
+                if (type == Arg::GP &amp;&amp; inst.kind.opcode == Move
+                    &amp;&amp; inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp()) {
+                    if (m_tmpWidth.useWidth(inst.args[1].tmp()) &lt;= Arg::Width32
+                        || m_tmpWidth.defWidth(inst.args[0].tmp()) &lt;= Arg::Width32)
+                        inst.kind.opcode = Move32;
+                }
+
+                inst.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
+                    if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
+                        return;
+
+                    Tmp aliasTmp = allocator.getAlias(tmp);
+                    Tmp assignedTmp;
+                    if (aliasTmp.isReg())
+                        assignedTmp = Tmp(aliasTmp.reg());
+                    else {
+                        auto reg = allocator.allocatedReg(aliasTmp);
+                        ASSERT(reg);
+                        assignedTmp = Tmp(reg);
+                    }
+                    ASSERT(assignedTmp.isReg());
+                    tmp = assignedTmp;
+                });
+
+                if (mayBeCoalescable &amp;&amp; inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp()
+                    &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp())
+                    inst = Inst();
+            }
+
+            // Remove all the useless moves we created in this block.
+            block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
+                return !inst;
+            });
+        }
+    }
+
+    static unsigned stackSlotMinimumWidth(Arg::Width width)
+    {
+        return width &lt;= Arg::Width32 ? 4 : 8;
+    }
+
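+    // Give each spilled Tmp a stack slot and rewrite the code: arguments that admit a
+    // stack slot are replaced in place, and every other occurrence goes through a freshly
+    // created (and unspillable) Tmp that is loaded before a use and stored back after a def.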
+    template&lt;Arg::Type type, typename AllocatorType&gt;
+    void addSpillAndFill(const AllocatorType&amp; allocator, HashSet&lt;unsigned&gt;&amp; unspillableTmps)
+    {
+        HashMap&lt;Tmp, StackSlot*&gt; stackSlots;
+        for (Tmp tmp : allocator.spilledTmps()) {
+            // All the spilled values become unspillable.
+            unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
+
+            // Allocate stack slot for each spilled value.
+            StackSlot* stackSlot = m_code.addStackSlot(
+                stackSlotMinimumWidth(m_tmpWidth.requiredWidth(tmp)), StackSlotKind::Spill);
+            bool isNewTmp = stackSlots.add(tmp, stackSlot).isNewEntry;
+            ASSERT_UNUSED(isNewTmp, isNewTmp);
+        }
+
+        // Rewrite the program to get rid of the spilled Tmp.
+        InsertionSet insertionSet(m_code);
+        for (BasicBlock* block : m_code) {
+            bool hasAliasedTmps = false;
+
+            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+
+                // The TmpWidth analysis will say that a Move only stores 32 bits into the destination
+                // if the source only had 32 bits worth of non-zero bits. Same for the source: it will
+                // only claim to read 32 bits from the source if only 32 bits of the destination are
+                // read. Note that we only apply this logic if this turns into a load or store, since
+                // Move is the canonical way to move data between GPRs.
+                bool canUseMove32IfDidSpill = false;
+                bool didSpill = false;
+                if (type == Arg::GP &amp;&amp; inst.kind.opcode == Move) {
+                    if ((inst.args[0].isTmp() &amp;&amp; m_tmpWidth.width(inst.args[0].tmp()) &lt;= Arg::Width32)
+                        || (inst.args[1].isTmp() &amp;&amp; m_tmpWidth.width(inst.args[1].tmp()) &lt;= Arg::Width32))
+                        canUseMove32IfDidSpill = true;
+                }
+
+                // Try to replace the register use by memory use when possible.
+                inst.forEachArg(
+                    [&amp;] (Arg&amp; arg, Arg::Role role, Arg::Type argType, Arg::Width width) {
+                        if (!arg.isTmp())
+                            return;
+                        if (argType != type)
+                            return;
+                        if (arg.isReg())
+                            return;
+                        
+                        auto stackSlotEntry = stackSlots.find(arg.tmp());
+                        if (stackSlotEntry == stackSlots.end())
+                            return;
+                        if (!inst.admitsStack(arg))
+                            return;
+                        
+                        // If the Tmp holds a constant then we want to rematerialize its
+                        // value rather than loading it from the stack. In order for that
+                        // optimization to kick in, we need to avoid placing the Tmp's stack
+                        // address into the instruction.
+                        if (!Arg::isColdUse(role)) {
+                            const UseCounts&lt;Tmp&gt;::Counts* counts = m_useCounts[arg.tmp()];
+                            if (counts &amp;&amp; counts-&gt;numConstDefs == 1 &amp;&amp; counts-&gt;numDefs == 1)
+                                return;
+                        }
+                        
+                        Arg::Width spillWidth = m_tmpWidth.requiredWidth(arg.tmp());
+                        if (Arg::isAnyDef(role) &amp;&amp; width &lt; spillWidth)
+                            return;
+                        ASSERT(inst.kind.opcode == Move || !(Arg::isAnyUse(role) &amp;&amp; width &gt; spillWidth));
+                        
+                        if (spillWidth != Arg::Width32)
+                            canUseMove32IfDidSpill = false;
+                        
+                        stackSlotEntry-&gt;value-&gt;ensureSize(
+                            canUseMove32IfDidSpill ? 4 : Arg::bytes(width));
+                        arg = Arg::stack(stackSlotEntry-&gt;value);
+                        didSpill = true;
+                    });
+
+                if (didSpill &amp;&amp; canUseMove32IfDidSpill)
+                    inst.kind.opcode = Move32;
+
+                // For every other case, add Load/Store as needed.
+                inst.forEachTmp([&amp;] (Tmp&amp; tmp, Arg::Role role, Arg::Type argType, Arg::Width) {
+                    if (tmp.isReg() || argType != type)
+                        return;
+
+                    auto stackSlotEntry = stackSlots.find(tmp);
+                    if (stackSlotEntry == stackSlots.end()) {
+                        Tmp alias = allocator.getAliasWhenSpilling(tmp);
+                        if (alias != tmp) {
+                            tmp = alias;
+                            hasAliasedTmps = true;
+                        }
+                        return;
+                    }
+
+                    Arg::Width spillWidth = m_tmpWidth.requiredWidth(tmp);
+                    Opcode move = Oops;
+                    switch (stackSlotMinimumWidth(spillWidth)) {
+                    case 4:
+                        move = type == Arg::GP ? Move32 : MoveFloat;
+                        break;
+                    case 8:
+                        move = type == Arg::GP ? Move : MoveDouble;
+                        break;
+                    default:
+                        RELEASE_ASSERT_NOT_REACHED();
+                        break;
+                    }
+
+                    tmp = m_code.newTmp(type);
+                    unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
+
+                    Arg arg = Arg::stack(stackSlotEntry-&gt;value);
+                    if (Arg::isAnyUse(role) &amp;&amp; role != Arg::Scratch)
+                        insertionSet.insert(instIndex, move, inst.origin, arg, tmp);
+                    if (Arg::isAnyDef(role))
+                        insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
+                });
+            }
+            insertionSet.execute(block);
+
+            if (hasAliasedTmps) {
+                block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
+                    return allocator.isUselessMove(inst);
+                });
+            }
+        }
+    }
+
+    void fixSpillsAfterTerminals()
+    {
+        // Because there may be terminals that produce values, the register
+        // allocator may want to spill those terminals. When it does, the
+        // spill code ends up after the terminal. If we left the code in this
+        // state, it'd be invalid because a terminal must be the last
+        // instruction in a block. We fix that here.
+
+        InsertionSet insertionSet(m_code);
+
+        bool addedBlocks = false;
+
+        for (BasicBlock* block : m_code) {
+            unsigned terminalIndex = block-&gt;size();
+            bool foundTerminal = false;
+            while (terminalIndex--) {
+                if (block-&gt;at(terminalIndex).isTerminal()) {
+                    foundTerminal = true;
+                    break;
+                }
+            }
+            ASSERT_UNUSED(foundTerminal, foundTerminal);
+
+            if (terminalIndex == block-&gt;size() - 1)
+                continue;
+
+            // There must be instructions after the terminal because it's not the last instruction.
+            ASSERT(terminalIndex &lt; block-&gt;size() - 1);
+            Vector&lt;Inst, 1&gt; instsToMove;
+            for (unsigned i = terminalIndex + 1; i &lt; block-&gt;size(); i++)
+                instsToMove.append(block-&gt;at(i));
+            RELEASE_ASSERT(instsToMove.size());
+
+            for (FrequentedBlock&amp; frequentedSuccessor : block-&gt;successors()) {
+                BasicBlock* successor = frequentedSuccessor.block();
+                // If successor's only predecessor is block, we can plant the spill inside
+                // the successor. Otherwise, we must split the critical edge and create
+                // a new block for the spill.
+                if (successor-&gt;numPredecessors() == 1) {
+                    insertionSet.insertInsts(0, instsToMove);
+                    insertionSet.execute(successor);
+                } else {
+                    addedBlocks = true;
+                    // FIXME: We probably want better block ordering here.
+                    BasicBlock* newBlock = m_code.addBlock();
+                    for (const Inst&amp; inst : instsToMove)
+                        newBlock-&gt;appendInst(inst);
+                    newBlock-&gt;appendInst(Inst(Jump, instsToMove.last().origin));
+                    newBlock-&gt;successors().append(successor);
+                    frequentedSuccessor.block() = newBlock;
+                }
+            }
+
+            block-&gt;resize(terminalIndex + 1);
+        }
+
+        if (addedBlocks)
+            m_code.resetReachability();
+    }
+
+    Code&amp; m_code;
+    TmpWidth m_tmpWidth;
+    UseCounts&lt;Tmp&gt;&amp; m_useCounts;
+    unsigned m_numIterations { 0 };
+};
+
+} // anonymous namespace
+
+void allocateRegistersByGraphColoring(Code&amp; code)
+{
+    PhaseScope phaseScope(code, &quot;Air::allocateRegistersByGraphColoring&quot;);
+
+    UseCounts&lt;Tmp&gt; useCounts(code);
+    GraphColoringRegisterAllocation graphColoringRegisterAllocation(code, useCounts);
+    graphColoringRegisterAllocation.run();
+}
+
+} } } // namespace JSC::B3::Air
+
+#endif // ENABLE(B3_JIT)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreb3airAirAllocateRegistersByGraphColoringhfromrev212812trunkSourceJavaScriptCoreb3airAirGraphColoringh"></a>
<div class="copfile"><h4>Copied: trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.h (from rev 212812, trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.h) (0 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/b3/air/AirAllocateRegistersByGraphColoring.h        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -0,0 +1,47 @@
</span><ins>+/*
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(B3_JIT)
+
+namespace JSC { namespace B3 { namespace Air {
+
+class Code;
+
+// We have two register allocators, both fundamentally derived from Chaitin's Yorktown
+// allocator:
+// http://cs.gmu.edu/~white/CS640/p98-chaitin.pdf
+//
+// We have an implementation of Briggs's optimistic allocator which is derivative of Chaitin's allocator:
+// http://www.cs.utexas.edu/users/mckinley/380C/lecs/briggs-thesis-1992.pdf
+//
+// And an implementation of George and Appel's Iterated Register Coalescing, which is derivative of Briggs's allocator:
+// http://www.cs.cmu.edu/afs/cs/academic/class/15745-s07/www/papers/george.pdf
+void allocateRegistersByGraphColoring(Code&amp;);
+
+} } } // namespace JSC::B3::Air
+
+#endif // ENABLE(B3_JIT)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreb3airAirGeneratecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/b3/air/AirGenerate.cpp (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirGenerate.cpp        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/b3/air/AirGenerate.cpp        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -28,6 +28,7 @@
</span><span class="cx"> 
</span><span class="cx"> #if ENABLE(B3_JIT)
</span><span class="cx"> 
</span><ins>+#include &quot;AirAllocateRegistersByGraphColoring.h&quot;
</ins><span class="cx"> #include &quot;AirAllocateStack.h&quot;
</span><span class="cx"> #include &quot;AirCode.h&quot;
</span><span class="cx"> #include &quot;AirDumpAsJS.h&quot;
</span><span class="lines">@@ -35,7 +36,6 @@
</span><span class="cx"> #include &quot;AirFixObviousSpills.h&quot;
</span><span class="cx"> #include &quot;AirFixPartialRegisterStalls.h&quot;
</span><span class="cx"> #include &quot;AirGenerationContext.h&quot;
</span><del>-#include &quot;AirGraphColoring.h&quot;
</del><span class="cx"> #include &quot;AirHandleCalleeSaves.h&quot;
</span><span class="cx"> #include &quot;AirLogRegisterPressure.h&quot;
</span><span class="cx"> #include &quot;AirLowerAfterRegAlloc.h&quot;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreb3airAirGraphColoringcpp"></a>
<div class="delfile"><h4>Deleted: trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.cpp (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.cpp        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.cpp        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -1,2133 +0,0 @@
</span><del>-/*
- * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include &quot;config.h&quot;
-#include &quot;AirGraphColoring.h&quot;
-
-#if ENABLE(B3_JIT)
-
-#include &quot;AirCode.h&quot;
-#include &quot;AirInsertionSet.h&quot;
-#include &quot;AirInstInlines.h&quot;
-#include &quot;AirLiveness.h&quot;
-#include &quot;AirPadInterference.h&quot;
-#include &quot;AirPhaseScope.h&quot;
-#include &quot;AirTmpInlines.h&quot;
-#include &quot;AirTmpWidth.h&quot;
-#include &quot;AirUseCounts.h&quot;
-#include &lt;wtf/ListDump.h&gt;
-
-namespace JSC { namespace B3 { namespace Air {
-
-namespace {
-
-bool debug = false;
-bool traceDebug = false;
-bool reportStats = false;
-
-// The AbstractColoringAllocator defines all the code that is independant
-// from the type or register and can be shared when allocating registers.
-template&lt;typename IndexType, typename TmpMapper&gt;
-class AbstractColoringAllocator {
-public:
-    AbstractColoringAllocator(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
-        : m_regsInPriorityOrder(regsInPriorityOrder)
-        , m_lastPrecoloredRegisterIndex(lastPrecoloredRegisterIndex)
-        , m_unspillableTmps(unspillableTmps)
-        , m_useCounts(useCounts)
-        , m_code(code)
-    {
-        for (Reg reg : m_regsInPriorityOrder)
-            m_mutableRegs.set(reg);
-        
-        initializeDegrees(tmpArraySize);
-        
-        m_adjacencyList.resize(tmpArraySize);
-        m_moveList.resize(tmpArraySize);
-        m_coalescedTmps.fill(0, tmpArraySize);
-        m_isOnSelectStack.ensureSize(tmpArraySize);
-        m_spillWorklist.ensureSize(tmpArraySize);
-    }
-
-protected:
-
-    unsigned registerCount() const { return m_regsInPriorityOrder.size(); }
-
-    IndexType getAlias(IndexType tmpIndex) const
-    {
-        IndexType alias = tmpIndex;
-        while (IndexType nextAlias = m_coalescedTmps[alias])
-            alias = nextAlias;
-        return alias;
-    }
-
-    void addEdge(IndexType a, IndexType b)
-    {
-        if (a == b)
-            return;
-        addEdgeDistinct(a, b);
-    }
-
-    void addToSpill(unsigned toSpill)
-    {
-        if (m_unspillableTmps.contains(toSpill))
-            return;
-
-        m_spillWorklist.add(toSpill);
-    }
-
-    bool isPrecolored(IndexType tmpIndex)
-    {
-        return tmpIndex &lt;= m_lastPrecoloredRegisterIndex;
-    }
-
-    void initializeDegrees(unsigned tmpArraySize)
-    {
-        m_degrees.resize(tmpArraySize);
-
-        // All precolored registers have  an &quot;infinite&quot; degree.
-        unsigned firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
-        for (unsigned i = 0; i &lt; firstNonRegIndex; ++i)
-            m_degrees[i] = std::numeric_limits&lt;unsigned&gt;::max();
-
-        memset(m_degrees.data() + firstNonRegIndex, 0, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
-    }
-
-    void addEdgeDistinct(IndexType a, IndexType b)
-    {
-        ASSERT(a != b);
-        bool isNewEdge = addInterferenceEdge(InterferenceEdge(a, b));
-        if (isNewEdge) {
-            if (!isPrecolored(a)) {
-                ASSERT(!m_adjacencyList[a].contains(b));
-                m_adjacencyList[a].append(b);
-                m_degrees[a]++;
-            }
-
-            if (!isPrecolored(b)) {
-                ASSERT(!m_adjacencyList[b].contains(a));
-                m_adjacencyList[b].append(a);
-                m_degrees[b]++;
-            }
-        }
-    }
-
-    bool addEdgeDistinctWithoutDegreeChange(IndexType a, IndexType b)
-    {
-        ASSERT(a != b);
-        bool isNewEdge = addInterferenceEdge(InterferenceEdge(a, b));
-        if (isNewEdge) {
-            if (!isPrecolored(a)) {
-                ASSERT(!m_adjacencyList[a].contains(b));
-                m_adjacencyList[a].append(b);
-            }
-
-            if (!isPrecolored(b)) {
-                ASSERT(!m_adjacencyList[b].contains(a));
-                m_adjacencyList[b].append(a);
-            }
-            return true;
-        }
-        return false;
-    }
-
-    template&lt;typename Function&gt;
-    void forEachAdjacent(IndexType tmpIndex, Function function)
-    {
-        for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
-            if (!hasBeenSimplified(adjacentTmpIndex))
-                function(adjacentTmpIndex);
-        }
-    }
-
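-    // A Tmp index counts as simplified once it is on the select stack or has been coalesced
-    // into another Tmp; such nodes are treated as removed from the interference graph.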
-    bool hasBeenSimplified(IndexType tmpIndex)
-    {
-        if (!ASSERT_DISABLED) {
-            if (!!m_coalescedTmps[tmpIndex])
-                ASSERT(getAlias(tmpIndex) != tmpIndex);
-        }
-
-        return m_isOnSelectStack.quickGet(tmpIndex) || !!m_coalescedTmps[tmpIndex];
-    }
-
-    bool canBeSafelyCoalesced(IndexType u, IndexType v)
-    {
-        ASSERT(!isPrecolored(v));
-        if (isPrecolored(u))
-            return precoloredCoalescingHeuristic(u, v);
-        return conservativeHeuristic(u, v);
-    }
-
-    bool conservativeHeuristic(IndexType u, IndexType v)
-    {
-        // This uses Briggs' conservative coalescing rule:
-        // If the combined node has fewer than K adjacent nodes with a degree &gt;= K,
-        // it is safe to combine the two nodes. The reason is that all the other neighbors are
-        // low-degree and will be simplified away, so the combined node is guaranteed a color
-        // and coalescing cannot make the graph uncolorable.
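-        // Example: with K = 4 registers, the coalescing is allowed as long as u and v together
-        // have at most 3 distinct neighbors of degree &gt;= 4. Every other neighbor is low-degree
-        // and will be simplified away, so the combined node always has a color available.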
-        ASSERT(u != v);
-        ASSERT(!isPrecolored(u));
-        ASSERT(!isPrecolored(v));
-
-        const auto&amp; adjacentsOfU = m_adjacencyList[u];
-        const auto&amp; adjacentsOfV = m_adjacencyList[v];
-
-        if (adjacentsOfU.size() + adjacentsOfV.size() &lt; registerCount()) {
-            // Shortcut: if the total number of adjacents is less than the number of registers, the condition is always met.
-            return true;
-        }
-
-        HashSet&lt;IndexType&gt; highOrderAdjacents;
-
-        for (IndexType adjacentTmpIndex : adjacentsOfU) {
-            ASSERT(adjacentTmpIndex != v);
-            ASSERT(adjacentTmpIndex != u);
-            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()) {
-                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
-                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= registerCount())
-                    return false;
-            }
-        }
-        for (IndexType adjacentTmpIndex : adjacentsOfV) {
-            ASSERT(adjacentTmpIndex != u);
-            ASSERT(adjacentTmpIndex != v);
-            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()) {
-                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
-                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= registerCount())
-                    return false;
-            }
-        }
-
-        ASSERT(highOrderAdjacents.size() &lt; registerCount());
-        return true;
-    }
-
-    bool precoloredCoalescingHeuristic(IndexType u, IndexType v)
-    {
-        if (traceDebug)
-            dataLog(&quot;    Checking precoloredCoalescingHeuristic\n&quot;);
-        ASSERT(isPrecolored(u));
-        ASSERT(!isPrecolored(v));
-        
-        // If u is a pinned register then it's always safe to coalesce. Note that when we call this,
-        // we have already proved that there is no interference between u and v.
-        if (!m_mutableRegs.get(m_coloredTmp[u]))
-            return true;
-
-        // If any adjacent of the non-colored node is not an adjacent of the colored node AND has a degree &gt;= K,
-        // there is a risk that this node needs to get the same color as our precolored node. If we coalesce such
-        // a move, we may create an uncolorable graph.
-        const auto&amp; adjacentsOfV = m_adjacencyList[v];
-        for (unsigned adjacentTmpIndex : adjacentsOfV) {
-            if (!isPrecolored(adjacentTmpIndex)
-                &amp;&amp; !hasBeenSimplified(adjacentTmpIndex)
-                &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= registerCount()
-                &amp;&amp; !hasInterferenceEdge(InterferenceEdge(u, adjacentTmpIndex)))
-                return false;
-        }
-        return true;
-    }
-
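-    // Pick a spill victim from the spill work list. The score below prefers Tmps with a high
-    // degree and few warm uses and defs, so spilling them relieves the most register pressure.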
-    IndexType selectSpill()
-    {
-        if (!m_hasSelectedSpill) {
-            m_hasSelectedSpill = true;
-
-            if (m_hasCoalescedNonTrivialMove)
-                m_coalescedTmpsAtSpill = m_coalescedTmps;
-        }
-
-        auto iterator = m_spillWorklist.begin();
-
-        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), &quot;selectSpill() called when there was no spill.&quot;);
-        RELEASE_ASSERT_WITH_MESSAGE(!m_unspillableTmps.contains(*iterator), &quot;trying to spill unspillable tmp&quot;);
-
-        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
-        // gets a register.
-        auto score = [&amp;] (Tmp tmp) -&gt; double {
-            // Air exposes the concept of &quot;fast tmps&quot;, and we interpret that to mean that the tmp
-            // should always be in a register.
-            if (m_code.isFastTmp(tmp))
-                return 0;
-            
-            // All else being equal, the score should be directly related to the degree.
-            double degree = static_cast&lt;double&gt;(m_degrees[TmpMapper::absoluteIndex(tmp)]);
-
-            // All else being equal, the score should be inversely related to the number of warm uses and
-            // defs.
-            const UseCounts&lt;Tmp&gt;::Counts* counts = m_useCounts[tmp];
-            if (!counts)
-                return std::numeric_limits&lt;double&gt;::infinity();
-            
-            double uses = counts-&gt;numWarmUses + counts-&gt;numDefs;
-
-            // If it's a constant, then it's not as bad to spill. We can rematerialize it in many
-            // cases.
-            if (counts-&gt;numConstDefs == 1 &amp;&amp; counts-&gt;numDefs == 1)
-                uses /= 2;
-
-            return degree / uses;
-        };
-
-        auto victimIterator = iterator;
-        double maxScore = score(TmpMapper::tmpFromAbsoluteIndex(*iterator));
-
-        ++iterator;
-        for (;iterator != m_spillWorklist.end(); ++iterator) {
-            double tmpScore = score(TmpMapper::tmpFromAbsoluteIndex(*iterator));
-            if (tmpScore &gt; maxScore) {
-                ASSERT(!m_unspillableTmps.contains(*iterator));
-                victimIterator = iterator;
-                maxScore = tmpScore;
-            }
-        }
-
-        IndexType victimIndex = *victimIterator;
-        if (traceDebug)
-            dataLogLn(&quot;Selecting spill &quot;, victimIndex);
-        ASSERT(!isPrecolored(victimIndex));
-        return victimIndex;
-    }
-
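-    // Pop Tmps off the select stack and give each one the first register, in priority order,
-    // not already used by a colored neighbor. Tmps with no available register become spills.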
-    void assignColors()
-    {
-        ASSERT(m_simplifyWorklist.isEmpty());
-        ASSERT(!m_spillWorklist.bitCount());
-
-        // Reclaim as much memory as possible.
-        m_interferenceEdges.clear();
-
-        m_degrees.clear();
-        m_moveList.clear();
-        m_simplifyWorklist.clear();
-        m_spillWorklist.clearAll();
-
-        // Try to color the Tmp on the stack.
-        m_coloredTmp.resize(m_adjacencyList.size());
-
-        while (!m_selectStack.isEmpty()) {
-            unsigned tmpIndex = m_selectStack.takeLast();
-            m_isOnSelectStack.quickClear(tmpIndex);
-            ASSERT(!isPrecolored(tmpIndex));
-            ASSERT(!m_coloredTmp[tmpIndex]);
-            RELEASE_ASSERT(getAlias(tmpIndex) == tmpIndex);
-
-            RegisterSet coloredRegisters;
-            for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
-                IndexType aliasTmpIndex = getAlias(adjacentTmpIndex);
-                Reg reg = m_coloredTmp[aliasTmpIndex];
-
-                ASSERT(!isPrecolored(aliasTmpIndex) || (isPrecolored(aliasTmpIndex) &amp;&amp; reg));
-
-                if (reg)
-                    coloredRegisters.set(reg);
-            }
-
-            bool colorAssigned = false;
-            for (Reg reg : m_regsInPriorityOrder) {
-                if (!coloredRegisters.get(reg)) {
-                    m_coloredTmp[tmpIndex] = reg;
-                    colorAssigned = true;
-                    break;
-                }
-            }
-
-            if (!colorAssigned)
-                m_spilledTmps.append(tmpIndex);
-        }
-
-        m_selectStack.clear();
-
-        if (m_spilledTmps.isEmpty())
-            m_coalescedTmpsAtSpill.clear();
-        else
-            m_coloredTmp.clear();
-    }
-
-    void dumpInterferenceGraphInDot(PrintStream&amp; out)
-    {
-        out.print(&quot;graph InterferenceGraph { \n&quot;);
-
-        HashSet&lt;Tmp&gt; tmpsWithInterferences;
-        for (const auto&amp; edge : m_interferenceEdges) {
-            tmpsWithInterferences.add(TmpMapper::tmpFromAbsoluteIndex(edge.first()));
-            tmpsWithInterferences.add(TmpMapper::tmpFromAbsoluteIndex(edge.second()));
-        }
-
-        for (const auto&amp; tmp : tmpsWithInterferences) {
-            unsigned tmpIndex = TmpMapper::absoluteIndex(tmp);
-            if (tmpIndex &lt; m_degrees.size())
-                out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot; (&quot;, m_degrees[tmpIndex], &quot;)\&quot;];\n&quot;);
-            else
-                out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot;\&quot;];\n&quot;);
-        }
-
-        for (const auto&amp; edge : m_interferenceEdges)
-            out.print(&quot;    &quot;, edge.first(), &quot; -- &quot;, edge.second(), &quot;;\n&quot;);
-        out.print(&quot;}\n&quot;);
-    }
-
-    // Interference edges are not directed. An edge between any two Tmps is represented
-    // by the concatenated values of the smaller Tmp followed by the bigger Tmp.
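-    // For example, the edge between indices 3 and 7 is stored as the 64-bit value 0x0000000300000007.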
-    class InterferenceEdge {
-    public:
-        InterferenceEdge()
-        {
-        }
-
-        InterferenceEdge(IndexType a, IndexType b)
-        {
-            ASSERT(a);
-            ASSERT(b);
-            ASSERT_WITH_MESSAGE(a != b, &quot;A Tmp can never interfere with itself. Doing so would force it to be the superposition of two registers.&quot;);
-
-            if (b &lt; a)
-                std::swap(a, b);
-            m_value = static_cast&lt;uint64_t&gt;(a) &lt;&lt; 32 | b;
-        }
-
-        InterferenceEdge(WTF::HashTableDeletedValueType)
-            : m_value(std::numeric_limits&lt;uint64_t&gt;::max())
-        {
-        }
-
-        IndexType first() const
-        {
-            return m_value &gt;&gt; 32 &amp; 0xffffffff;
-        }
-
-        IndexType second() const
-        {
-            return m_value &amp; 0xffffffff;
-        }
-
-        bool operator==(const InterferenceEdge other) const
-        {
-            return m_value == other.m_value;
-        }
-
-        bool isHashTableDeletedValue() const
-        {
-            return *this == InterferenceEdge(WTF::HashTableDeletedValue);
-        }
-
-        unsigned hash() const
-        {
-            return WTF::IntHash&lt;uint64_t&gt;::hash(m_value);
-        }
-
-        void dump(PrintStream&amp; out) const
-        {
-            out.print(first(), &quot;&lt;=&gt;&quot;, second());
-        }
-
-    private:
-        uint64_t m_value { 0 };
-    };
-
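-    // Returns true if the edge was not already in the interference set.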
-    bool addInterferenceEdge(InterferenceEdge edge)
-    {
-        return m_interferenceEdges.add(edge).isNewEntry;
-    }
-
-    bool hasInterferenceEdge(InterferenceEdge edge)
-    {
-        return m_interferenceEdges.contains(edge);
-    }
-
-    struct InterferenceEdgeHash {
-        static unsigned hash(const InterferenceEdge&amp; key) { return key.hash(); }
-        static bool equal(const InterferenceEdge&amp; a, const InterferenceEdge&amp; b) { return a == b; }
-        static const bool safeToCompareToEmptyOrDeleted = true;
-    };
-    typedef SimpleClassHashTraits&lt;InterferenceEdge&gt; InterferenceEdgeHashTraits;
-
-    const Vector&lt;Reg&gt;&amp; m_regsInPriorityOrder;
-    RegisterSet m_mutableRegs;
-    IndexType m_lastPrecoloredRegisterIndex { 0 };
-
-    // The interference graph.
-    HashSet&lt;InterferenceEdge, InterferenceEdgeHash, InterferenceEdgeHashTraits&gt; m_interferenceEdges;
-
-    Vector&lt;Vector&lt;IndexType, 0, UnsafeVectorOverflow, 4&gt;, 0, UnsafeVectorOverflow&gt; m_adjacencyList;
-    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_degrees;
-
-    // Instead of keeping track of the move instructions, we just keep their operands around and use the index
-    // in the vector as the &quot;identifier&quot; for the move.
-    struct MoveOperands {
-        IndexType srcIndex;
-        IndexType dstIndex;
-    };
-    Vector&lt;MoveOperands, 0, UnsafeVectorOverflow&gt; m_coalescingCandidates;
-
-    // List of every move instruction associated with a Tmp.
-    Vector&lt;HashSet&lt;IndexType, typename DefaultHash&lt;IndexType&gt;::Hash, WTF::UnsignedWithZeroKeyHashTraits&lt;IndexType&gt;&gt;&gt; m_moveList;
-
-    // Colors.
-    Vector&lt;Reg, 0, UnsafeVectorOverflow&gt; m_coloredTmp;
-    Vector&lt;IndexType&gt; m_spilledTmps;
-
-    // Values that have been coalesced with another value.
-    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmps;
-
-    // The stack of Tmp removed from the graph and ready for coloring.
-    BitVector m_isOnSelectStack;
-    Vector&lt;IndexType&gt; m_selectStack;
-
-    IndexType m_framePointerIndex { 0 };
-    BitVector m_interferesWithFramePointer;
-    // Low-degree, non-Move related.
-    Vector&lt;IndexType&gt; m_simplifyWorklist;
-    // High-degree Tmp.
-    BitVector m_spillWorklist;
-
-    bool m_hasSelectedSpill { false };
-    bool m_hasCoalescedNonTrivialMove { false };
-
-    // The mapping of Tmp to their alias for Moves that are always coalescing regardless of spilling.
-    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmpsAtSpill;
-
-    const HashSet&lt;unsigned&gt;&amp; m_unspillableTmps;
-    const UseCounts&lt;Tmp&gt;&amp; m_useCounts;
-    Code&amp; m_code;
-};
-
-template &lt;typename IndexType, typename TmpMapper&gt;
-class Briggs : public AbstractColoringAllocator&lt;IndexType, TmpMapper&gt; {
-    using Base = AbstractColoringAllocator&lt;IndexType, TmpMapper&gt;;
-protected:
-    using Base::m_isOnSelectStack;
-    using Base::m_selectStack;
-    using Base::m_framePointerIndex;
-    using Base::m_interferesWithFramePointer;
-    using Base::m_simplifyWorklist;
-    using Base::m_spillWorklist;
-    using Base::m_hasSelectedSpill;
-    using Base::m_hasCoalescedNonTrivialMove;
-    using Base::m_degrees;
-    using Base::registerCount;
-    using Base::m_coalescedTmps;
-    using Base::m_moveList;
-    using Base::m_useCounts;
-    using Base::m_coalescedTmpsAtSpill;
-    using Base::m_spilledTmps;
-    using MoveOperands = typename Base::MoveOperands;
-    using Base::m_coalescingCandidates;
-    using Base::m_lastPrecoloredRegisterIndex;
-    using Base::m_coloredTmp;
-    using Base::m_code;
-    using InterferenceEdge = typename Base::InterferenceEdge;
-    using Base::m_unspillableTmps;
-    using Base::hasInterferenceEdge;
-    using Base::getAlias;
-    using Base::addEdge;
-    using Base::isPrecolored;
-    using Base::canBeSafelyCoalesced;
-    using Base::addEdgeDistinctWithoutDegreeChange;
-    using Base::forEachAdjacent;
-    using Base::hasBeenSimplified;
-    using Base::addToSpill;
-    using Base::m_interferenceEdges;
-
-public:
-    Briggs(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
-        : Base(code, regsInPriorityOrder, lastPrecoloredRegisterIndex, tmpArraySize, unspillableTmps, useCounts)
-    {
-    }
-
-    void allocate()
-    {
-        bool changed = false;
-
-        auto coalesceMove = [&amp;] (unsigned&amp; move) {
-            bool coalesced = coalesce(move);
-            if (coalesced) {
-                changed = true;
-                // We won't need to handle this move in the future since it's already coalesced.
-                move = UINT_MAX;
-            }
-        };
-
-        // We first coalesce until we can't coalesce any more.
-        do {
-            changed = false;
-            m_worklistMoves.forEachMove(coalesceMove);
-        } while (changed);
-        do {
-            changed = false;
-            m_worklistMoves.forEachLowPriorityMove(coalesceMove);
-        } while (changed);
-
-        // Then we create our select stack. The invariant we start with here is that nodes in
-        // the interference graph with degree &gt;= k are on the spill list. Nodes with degree &lt; k
-        // are on the simplify worklist. A node can move from the spill list to the simplify
-        // list (but not the other way around; this is different from IRC because IRC
-        // runs this while coalescing, whereas we do all our coalescing before this). Once a node is
-        // added to the select stack, it's not on either list, but only on the select stack.
-        // Once on the select stack, logically, it's no longer in the interference graph.
-        auto assertInvariants = [&amp;] () {
-            if (ASSERT_DISABLED)
-                return;
-
-            IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
-            unsigned registerCount = this-&gt;registerCount();
-            for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
-                if (getAlias(i) != i)
-                    continue;
-                if (m_isOnSelectStack.contains(i)) {
-                    ASSERT(!m_simplifyWorklist.contains(i) &amp;&amp; !m_spillWorklist.contains(i));
-                    continue;
-                }
-                unsigned degree = m_degrees[i];
-                if (degree &gt;= registerCount) {
-                    ASSERT(m_unspillableTmps.contains(i) || m_spillWorklist.contains(i));
-                    ASSERT(!m_simplifyWorklist.contains(i));
-                    continue;
-                }
-                ASSERT(m_simplifyWorklist.contains(i));
-            }
-        };
-
-        makeInitialWorklist();
-        assertInvariants();
-        do {
-            changed = false;
-
-            while (m_simplifyWorklist.size()) {
-                simplify();
-                assertInvariants();
-            }
-
-            if (m_spillWorklist.bitCount()) {
-                selectSpill();
-                changed = true;
-                ASSERT(m_simplifyWorklist.size() == 1);
-            }
-        } while (changed);
-
-        if (!ASSERT_DISABLED) {
-            ASSERT(!m_simplifyWorklist.size());
-            ASSERT(!m_spillWorklist.bitCount());
-            IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
-            for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i)
-                ASSERT(hasBeenSimplified(i));
-        }
-
-        assignColors();
-    }
-
-protected:
-    
-    bool coalesce(unsigned&amp; moveIndex)
-    {
-        const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
-        IndexType u = getAlias(moveOperands.srcIndex);
-        IndexType v = getAlias(moveOperands.dstIndex);
-
-        if (isPrecolored(v))
-            std::swap(u, v);
-
-        if (traceDebug)
-            dataLog(&quot;Coalescing move at index &quot;, moveIndex, &quot; u = &quot;, TmpMapper::tmpFromAbsoluteIndex(u), &quot; v = &quot;, TmpMapper::tmpFromAbsoluteIndex(v), &quot;    &quot;);
-
-        if (u == v) {
-            if (traceDebug)
-                dataLog(&quot;Already Coalesced. They're equal.\n&quot;);
-            return false;
-        }
-
-        if (isPrecolored(v)
-            || hasInterferenceEdge(InterferenceEdge(u, v))
-            || (u == m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))) {
-
-            // No need to ever consider this move again if it interferes.
-            // No coalescing will remove the interference.
-            moveIndex = UINT_MAX;
-
-            if (!ASSERT_DISABLED) {
-                if (isPrecolored(v))
-                    ASSERT(isPrecolored(u));
-            }
-
-            if (traceDebug)
-                dataLog(&quot;Constrained\n&quot;);
-
-            return false;
-        }
-        
-        if (canBeSafelyCoalesced(u, v)) {
-            combine(u, v);
-            m_hasCoalescedNonTrivialMove = true;
-
-            if (traceDebug)
-                dataLog(&quot;    Safe Coalescing\n&quot;);
-            return true;
-        }
-
-        if (traceDebug)
-            dataLog(&quot;    Failed coalescing.\n&quot;);
-
-        return false;
-    }
-
-    void combine(IndexType u, IndexType v)
-    {
-        ASSERT(!m_coalescedTmps[v]);
-        m_coalescedTmps[v] = u;
-
-        auto&amp; vMoves = m_moveList[v];
-        m_moveList[u].add(vMoves.begin(), vMoves.end());
-
-        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
-            if (addEdgeDistinctWithoutDegreeChange(adjacentTmpIndex, u)) {
-                // If we added a new edge between the adjacentTmp and u, it replaces the edge
-                // that existed with v.
-                // The degree of adjacentTmp remains the same since the edge just changed from v to u.
-                // All we need to do is update the degree of u.
-                if (!isPrecolored(u))
-                    m_degrees[u]++;
-            } else {
-                // If we already had an edge between the adjacentTmp and u, the degree of u
-                // is already correct. The degree of the adjacentTmp decreases since the edge
-                // with v is no longer relevant (we can think of it as merged with the edge with u).
-                decrementDegree(adjacentTmpIndex);
-            }
-        });
-
-        if (m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))
-            m_interferesWithFramePointer.quickSet(u);
-    }
-
-
-    void makeInitialWorklist()
-    {
-        m_simplifyWorklist.clear();
-        m_spillWorklist.clearAll();
-
-        if (traceDebug)
-            dataLogLn(&quot;------------------\nMaking initial worklist&quot;);
-
-        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
-        unsigned registerCount = this-&gt;registerCount();
-        for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
-            ASSERT(!isPrecolored(i));
-            if (hasBeenSimplified(i))
-                continue;
-            unsigned degree = m_degrees[i];
-            if (degree &lt; registerCount) {
-                if (traceDebug)
-                    dataLogLn(&quot;Adding &quot;, TmpMapper::tmpFromAbsoluteIndex(i), &quot; to simplify worklist&quot;);
-                m_simplifyWorklist.append(i);
-            } else {
-                if (traceDebug)
-                    dataLogLn(&quot;Adding &quot;, TmpMapper::tmpFromAbsoluteIndex(i), &quot; to spill worklist&quot;);
-                addToSpill(i);
-            }
-        }
-    }
-
-    // A low-degree vertex can always be colored: just pick any color not taken by its
-    // adjacent vertices.
-    // The &quot;Simplify&quot; phase takes a low-degree vertex out of the interference graph to simplify it.
-    void simplify()
-    {
-        IndexType lastIndex = m_simplifyWorklist.takeLast();
-
-        ASSERT(!m_selectStack.contains(lastIndex));
-        ASSERT(!m_isOnSelectStack.get(lastIndex));
-        ASSERT(!m_spillWorklist.contains(lastIndex));
-
-        m_selectStack.append(lastIndex);
-        m_isOnSelectStack.quickSet(lastIndex);
-
-        if (traceDebug)
-            dataLogLn(&quot;Simplifying &quot;, lastIndex, &quot; by adding it to select stack&quot;);
-
-        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
-            decrementDegreeInSimplification(adjacentTmpIndex);
-        });
-    }
-
-    void selectSpill()
-    {
-        IndexType victimIndex = Base::selectSpill();
-        m_spillWorklist.quickClear(victimIndex);
-        m_simplifyWorklist.append(victimIndex);
-    }
-
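-    // Work list of move indices for the Briggs allocator. Indices are handed out in the same
-    // order as m_coalescingCandidates, and a move that has been fully handled is overwritten
-    // with UINT_MAX so that later passes skip it.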
-    struct MoveSet {
-        unsigned addMove()
-        {
-            ASSERT(m_lowPriorityMoveList.isEmpty());
-
-            unsigned nextIndex = m_positionInMoveList++;
-            m_moveList.append(nextIndex);
-            return nextIndex;
-        }
-
-        void startAddingLowPriorityMoves()
-        {
-            ASSERT(m_lowPriorityMoveList.isEmpty());
-        }
-
-        unsigned addLowPriorityMove()
-        {
-            unsigned nextIndex = m_positionInMoveList++;
-            m_lowPriorityMoveList.append(nextIndex);
-
-            return nextIndex;
-        }
-
-        // We pass references to the moves because the function
-        // may coalesce a move and mark it so that it does not
-        // need to be visited again.
-        template &lt;typename Function&gt;
-        void forEachMove(Function function)
-        {
-            for (unsigned&amp; move : m_moveList) {
-                if (move != UINT_MAX)
-                    function(move);
-            }
-        }
-
-        template &lt;typename Function&gt;
-        void forEachLowPriorityMove(Function function)
-        {
-            for (unsigned&amp; move : m_lowPriorityMoveList) {
-                if (move != UINT_MAX)
-                    function(move);
-            }
-        }
-
-        void clear()
-        {
-            m_positionInMoveList = 0;
-            m_moveList.clear();
-            m_lowPriorityMoveList.clear();
-        }
-
-    private:
-        unsigned m_positionInMoveList;
-        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_moveList;
-        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_lowPriorityMoveList;
-    };
-
-    void decrementDegree(IndexType tmpIndex)
-    {
-        ASSERT(m_degrees[tmpIndex]);
-        --m_degrees[tmpIndex];
-    }
-
-    void decrementDegreeInSimplification(IndexType tmpIndex)
-    {
-        ASSERT(m_degrees[tmpIndex]);
-        unsigned oldDegree = m_degrees[tmpIndex]--;
-
-        if (oldDegree == registerCount()) {
-            ASSERT(m_degrees[tmpIndex] &lt; registerCount());
-            if (traceDebug)
-                dataLogLn(&quot;Moving tmp &quot;, tmpIndex, &quot; from spill list to simplify list because its degree is now less than k&quot;);
-
-            if (!ASSERT_DISABLED)
-                ASSERT(m_unspillableTmps.contains(tmpIndex) || m_spillWorklist.contains(tmpIndex));
-            m_spillWorklist.quickClear(tmpIndex);
-
-            ASSERT(!m_simplifyWorklist.contains(tmpIndex));
-            m_simplifyWorklist.append(tmpIndex);
-        }
-    }
-
-    void assignColors()
-    {
-        m_worklistMoves.clear();
-        Base::assignColors();
-    }
-
-    // Set of &quot;move&quot; enabled for possible coalescing.
-    MoveSet m_worklistMoves;
-};
-
-template &lt;typename IndexType, typename TmpMapper&gt;
-class IRC : public AbstractColoringAllocator&lt;IndexType, TmpMapper&gt; {
-    using Base = AbstractColoringAllocator&lt;IndexType, TmpMapper&gt;;
-protected:
-    using Base::m_isOnSelectStack;
-    using Base::m_selectStack;
-    using Base::m_framePointerIndex;
-    using Base::m_interferesWithFramePointer;
-    using Base::m_simplifyWorklist;
-    using Base::m_spillWorklist;
-    using Base::m_hasSelectedSpill;
-    using Base::m_hasCoalescedNonTrivialMove;
-    using Base::m_degrees;
-    using Base::registerCount;
-    using Base::m_coalescedTmps;
-    using Base::m_moveList;
-    using Base::m_useCounts;
-    using Base::m_coalescedTmpsAtSpill;
-    using Base::m_spilledTmps;
-    using MoveOperands = typename Base::MoveOperands;
-    using Base::m_coalescingCandidates;
-    using Base::m_lastPrecoloredRegisterIndex;
-    using Base::m_coloredTmp;
-    using Base::m_code;
-    using InterferenceEdge = typename Base::InterferenceEdge;
-    using Base::m_unspillableTmps;
-    using Base::hasInterferenceEdge;
-    using Base::getAlias;
-    using Base::addEdge;
-    using Base::isPrecolored;
-    using Base::canBeSafelyCoalesced;
-    using Base::addEdgeDistinctWithoutDegreeChange;
-    using Base::forEachAdjacent;
-    using Base::hasBeenSimplified;
-    using Base::addToSpill;
-    using Base::m_interferenceEdges;
-    using Base::m_adjacencyList;
-    using Base::dumpInterferenceGraphInDot;
-
-public:
-    IRC(Code&amp; code, const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize, const HashSet&lt;unsigned&gt;&amp; unspillableTmps, const UseCounts&lt;Tmp&gt;&amp; useCounts)
-        : Base(code, regsInPriorityOrder, lastPrecoloredRegisterIndex, tmpArraySize, unspillableTmps, useCounts)
-    {
-    }
-
-    void allocate()
-    {
-        m_activeMoves.ensureSize(m_worklistMoves.totalNumberOfMoves());
-        ASSERT_WITH_MESSAGE(m_activeMoves.size() &gt;= m_coalescingCandidates.size(), &quot;The activeMove set should be big enough for the quick operations of BitVector.&quot;);
-
-        makeWorkList();
-
-        if (debug) {
-            dumpInterferenceGraphInDot(WTF::dataFile());
-            dataLog(&quot;Coalescing candidates:\n&quot;);
-            for (MoveOperands&amp; moveOp : m_coalescingCandidates) {
-                dataLog(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(moveOp.srcIndex),
-                    &quot; -&gt; &quot;, TmpMapper::tmpFromAbsoluteIndex(moveOp.dstIndex), &quot;\n&quot;);
-            }
-            dataLog(&quot;Initial work list\n&quot;);
-            dumpWorkLists(WTF::dataFile());
-        }
-
-        do {
-            if (traceDebug) {
-                dataLog(&quot;Before Graph simplification iteration\n&quot;);
-                dumpWorkLists(WTF::dataFile());
-            }
-
-            if (!m_simplifyWorklist.isEmpty())
-                simplify();
-            else if (!m_worklistMoves.isEmpty())
-                coalesce();
-            else if (!m_freezeWorklist.isEmpty())
-                freeze();
-            else if (m_spillWorklist.bitCount())
-                selectSpill();
-
-            if (traceDebug) {
-                dataLog(&quot;After Graph simplification iteration\n&quot;);
-                dumpWorkLists(WTF::dataFile());
-            }
-        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || m_spillWorklist.bitCount());
-
-        assignColors();
-    }
-
-protected:
-
-    void makeWorkList()
-    {
-        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
-        for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
-            unsigned degree = m_degrees[i];
-            if (degree &gt;= registerCount())
-                addToSpill(i);
-            else if (!m_moveList[i].isEmpty())
-                m_freezeWorklist.add(i);
-            else
-                m_simplifyWorklist.append(i);
-        }
-    }
-
-    // A low-degree vertex can always be colored: just pick any color not taken by its
-    // adjacent vertices.
-    // The &quot;Simplify&quot; phase takes a low-degree vertex out of the interference graph to simplify it.
-    void simplify()
-    {
-        IndexType lastIndex = m_simplifyWorklist.takeLast();
-
-        ASSERT(!m_selectStack.contains(lastIndex));
-        ASSERT(!m_isOnSelectStack.get(lastIndex));
-        m_selectStack.append(lastIndex);
-        m_isOnSelectStack.quickSet(lastIndex);
-
-        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
-            decrementDegree(adjacentTmpIndex);
-        });
-    }
-
-    void coalesce()
-    {
-        unsigned moveIndex = m_worklistMoves.takeLastMove();
-        const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
-        IndexType u = getAlias(moveOperands.srcIndex);
-        IndexType v = getAlias(moveOperands.dstIndex);
-
-        if (isPrecolored(v))
-            std::swap(u, v);
-
-        if (traceDebug)
-            dataLog(&quot;Coalescing move at index &quot;, moveIndex, &quot; u = &quot;, u, &quot; v = &quot;, v, &quot;\n&quot;);
-
-        if (u == v) {
-            addWorkList(u);
-
-            if (traceDebug)
-                dataLog(&quot;    Coalesced\n&quot;);
-        } else if (isPrecolored(v)
-            || hasInterferenceEdge(InterferenceEdge(u, v))
-            || (u == m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))) {
-            addWorkList(u);
-            addWorkList(v);
-
-            if (traceDebug)
-                dataLog(&quot;    Constrained\n&quot;);
-        } else if (canBeSafelyCoalesced(u, v)) {
-            combine(u, v);
-            addWorkList(u);
-            m_hasCoalescedNonTrivialMove = true;
-
-            if (traceDebug)
-                dataLog(&quot;    Safe Coalescing\n&quot;);
-        } else {
-            m_activeMoves.quickSet(moveIndex);
-            if (traceDebug)
-                dataLog(&quot;    Failed coalescing, added to active moves.\n&quot;);
-        }
-    }
-
-    void addWorkList(IndexType tmpIndex)
-    {
-        if (!isPrecolored(tmpIndex) &amp;&amp; m_degrees[tmpIndex] &lt; registerCount() &amp;&amp; !isMoveRelated(tmpIndex)) {
-            m_freezeWorklist.remove(tmpIndex);
-            m_simplifyWorklist.append(tmpIndex);
-        }
-    }
-
-    void combine(IndexType u, IndexType v)
-    {
-        if (!m_freezeWorklist.remove(v))
-            m_spillWorklist.quickClear(v);
-
-        ASSERT(!m_coalescedTmps[v]);
-        m_coalescedTmps[v] = u;
-
-        auto&amp; vMoves = m_moveList[v];
-        m_moveList[u].add(vMoves.begin(), vMoves.end());
-
-        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
-            if (addEdgeDistinctWithoutDegreeChange(adjacentTmpIndex, u)) {
-                // If we added a new edge between the adjacentTmp and u, it replaces the edge
-                // that existed with v.
-                // The degree of adjacentTmp remains the same since the edge just changed from v to u.
-                // All we need to do is update the degree of u.
-                if (!isPrecolored(u))
-                    m_degrees[u]++;
-            } else {
-                // If we already had an edge between the adjacentTmp and u, the degree of u
-                // is already correct. The degree of the adjacentTmp decreases since the edge
-                // with v is no longer relevant (we can think of it as merged with the edge with u).
-                decrementDegree(adjacentTmpIndex);
-            }
-        });
-
-        if (m_framePointerIndex &amp;&amp; m_interferesWithFramePointer.quickGet(v))
-            m_interferesWithFramePointer.quickSet(u);
-
-        if (m_degrees[u] &gt;= registerCount() &amp;&amp; m_freezeWorklist.remove(u))
-            addToSpill(u);
-    }
-
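-    // Give up on coalescing the moves of one move-related, low-degree Tmp so that it can be
-    // simplified.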
-    void freeze()
-    {
-        IndexType victimIndex = m_freezeWorklist.takeAny();
-        ASSERT_WITH_MESSAGE(getAlias(victimIndex) == victimIndex, &quot;coalesce() should not leave aliased Tmp in the worklist.&quot;);
-        m_simplifyWorklist.append(victimIndex);
-        freezeMoves(victimIndex);
-    }
-
-    void freezeMoves(IndexType tmpIndex)
-    {
-        forEachNodeMoves(tmpIndex, [this, tmpIndex] (IndexType moveIndex) {
-            if (!m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.takeMove(moveIndex);
-
-            const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
-            IndexType srcTmpIndex = moveOperands.srcIndex;
-            IndexType dstTmpIndex = moveOperands.dstIndex;
-
-            IndexType originalOtherTmp = srcTmpIndex != tmpIndex ? srcTmpIndex : dstTmpIndex;
-            IndexType otherTmpIndex = getAlias(originalOtherTmp);
-            if (m_degrees[otherTmpIndex] &lt; registerCount() &amp;&amp; !isMoveRelated(otherTmpIndex)) {
-                if (m_freezeWorklist.remove(otherTmpIndex))
-                    m_simplifyWorklist.append(otherTmpIndex);
-            }
-        });
-    }
-
-    void decrementDegree(IndexType tmpIndex)
-    {
-        ASSERT(m_degrees[tmpIndex]);
-
-        unsigned oldDegree = m_degrees[tmpIndex]--;
-        if (oldDegree == registerCount()) {
-            enableMovesOnValueAndAdjacents(tmpIndex);
-            m_spillWorklist.quickClear(tmpIndex);
-            if (isMoveRelated(tmpIndex))
-                m_freezeWorklist.add(tmpIndex);
-            else
-                m_simplifyWorklist.append(tmpIndex);
-        }
-    }
-
-    void selectSpill()
-    {
-        IndexType victimIndex = Base::selectSpill();
-        m_spillWorklist.quickClear(victimIndex);
-        m_simplifyWorklist.append(victimIndex);
-        freezeMoves(victimIndex);
-    }
-
-    void assignColors()
-    {
-        ASSERT(m_freezeWorklist.isEmpty());
-        m_worklistMoves.clear();
-        Base::assignColors();
-    }
-
-
-    bool isMoveRelated(IndexType tmpIndex)
-    {
-        for (unsigned moveIndex : m_moveList[tmpIndex]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                return true;
-        }
-        return false;
-    }
-
-    template&lt;typename Function&gt;
-    void forEachNodeMoves(IndexType tmpIndex, Function function)
-    {
-        for (unsigned moveIndex : m_moveList[tmpIndex]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                function(moveIndex);
-        }
-    }
-
-    void enableMovesOnValue(IndexType tmpIndex)
-    {
-        for (unsigned moveIndex : m_moveList[tmpIndex]) {
-            if (m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.returnMove(moveIndex);
-        }
-    }
-
-    void enableMovesOnValueAndAdjacents(IndexType tmpIndex)
-    {
-        enableMovesOnValue(tmpIndex);
-
-        forEachAdjacent(tmpIndex, [this] (IndexType adjacentTmpIndex) {
-            enableMovesOnValue(adjacentTmpIndex);
-        });
-    }
-
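-    // Move work list for IRC. m_positionInMoveList maps a move index to its position in either
-    // m_moveList or m_lowPriorityMoveList, or to UINT_MAX when the move is not in the set, which
-    // makes takeMove() and returnMove() constant time.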
-    struct OrderedMoveSet {
-        unsigned addMove()
-        {
-            ASSERT(m_lowPriorityMoveList.isEmpty());
-            ASSERT(!m_firstLowPriorityMoveIndex);
-
-            unsigned nextIndex = m_positionInMoveList.size();
-            unsigned position = m_moveList.size();
-            m_moveList.append(nextIndex);
-            m_positionInMoveList.append(position);
-            return nextIndex;
-        }
-
-        void startAddingLowPriorityMoves()
-        {
-            ASSERT(m_lowPriorityMoveList.isEmpty());
-            m_firstLowPriorityMoveIndex = m_moveList.size();
-        }
-
-        unsigned addLowPriorityMove()
-        {
-            ASSERT(m_firstLowPriorityMoveIndex == m_moveList.size());
-
-            unsigned nextIndex = m_positionInMoveList.size();
-            unsigned position = m_lowPriorityMoveList.size();
-            m_lowPriorityMoveList.append(nextIndex);
-            m_positionInMoveList.append(position);
-
-            ASSERT(nextIndex &gt;= m_firstLowPriorityMoveIndex);
-
-            return nextIndex;
-        }
-
-        bool isEmpty() const
-        {
-            return m_moveList.isEmpty() &amp;&amp; m_lowPriorityMoveList.isEmpty();
-        }
-
-        bool contains(unsigned index)
-        {
-            return m_positionInMoveList[index] != std::numeric_limits&lt;unsigned&gt;::max();
-        }
-
-        void takeMove(unsigned moveIndex)
-        {
-            unsigned positionInMoveList = m_positionInMoveList[moveIndex];
-            if (positionInMoveList == std::numeric_limits&lt;unsigned&gt;::max())
-                return;
-
-            if (moveIndex &lt; m_firstLowPriorityMoveIndex) {
-                ASSERT(m_moveList[positionInMoveList] == moveIndex);
-                unsigned lastIndex = m_moveList.last();
-                m_positionInMoveList[lastIndex] = positionInMoveList;
-                m_moveList[positionInMoveList] = lastIndex;
-                m_moveList.removeLast();
-            } else {
-                ASSERT(m_lowPriorityMoveList[positionInMoveList] == moveIndex);
-                unsigned lastIndex = m_lowPriorityMoveList.last();
-                m_positionInMoveList[lastIndex] = positionInMoveList;
-                m_lowPriorityMoveList[positionInMoveList] = lastIndex;
-                m_lowPriorityMoveList.removeLast();
-            }
-
-            m_positionInMoveList[moveIndex] = std::numeric_limits&lt;unsigned&gt;::max();
-
-            ASSERT(!contains(moveIndex));
-        }
-
-        unsigned takeLastMove()
-        {
-            ASSERT(!isEmpty());
-
-            unsigned lastIndex;
-            if (!m_moveList.isEmpty()) {
-                lastIndex = m_moveList.takeLast();
-                ASSERT(m_positionInMoveList[lastIndex] == m_moveList.size());
-            } else {
-                lastIndex = m_lowPriorityMoveList.takeLast();
-                ASSERT(m_positionInMoveList[lastIndex] == m_lowPriorityMoveList.size());
-            }
-            m_positionInMoveList[lastIndex] = std::numeric_limits&lt;unsigned&gt;::max();
-
-            ASSERT(!contains(lastIndex));
-            return lastIndex;
-        }
-
-        void returnMove(unsigned index)
-        {
-            // This assertion is a bit strict but that is how the move list should be used. The only moves that can
-            // return to the list are the ones that we previously failed to coalesce with the conservative heuristics.
-            // Values should not be added back if they were never taken out when attempting coalescing.
-            ASSERT(!contains(index));
-
-            if (index &lt; m_firstLowPriorityMoveIndex) {
-                unsigned position = m_moveList.size();
-                m_moveList.append(index);
-                m_positionInMoveList[index] = position;
-            } else {
-                unsigned position = m_lowPriorityMoveList.size();
-                m_lowPriorityMoveList.append(index);
-                m_positionInMoveList[index] = position;
-            }
-
-            ASSERT(contains(index));
-        }
-
-        void clear()
-        {
-            m_positionInMoveList.clear();
-            m_moveList.clear();
-            m_lowPriorityMoveList.clear();
-        }
-
-        unsigned totalNumberOfMoves()
-        {
-            return m_moveList.size() + m_lowPriorityMoveList.size();
-        }
-
-    private:
-        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_positionInMoveList;
-        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_moveList;
-        Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_lowPriorityMoveList;
-        unsigned m_firstLowPriorityMoveIndex { 0 };
-    };
-
-    void dumpWorkLists(PrintStream&amp; out)
-    {
-        out.print(&quot;Simplify work list:\n&quot;);
-        for (unsigned tmpIndex : m_simplifyWorklist)
-            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
-        out.printf(&quot;Moves work list is empty? %d\n&quot;, m_worklistMoves.isEmpty());
-        out.print(&quot;Freeze work list:\n&quot;);
-        for (unsigned tmpIndex : m_freezeWorklist)
-            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
-        out.print(&quot;Spill work list:\n&quot;);
-        for (unsigned tmpIndex : m_spillWorklist)
-            out.print(&quot;    &quot;, TmpMapper::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
-    }
-
-    // Work lists.
-    // Low-degree, Move related.
-    HashSet&lt;IndexType&gt; m_freezeWorklist;
-    // Set of &quot;move&quot; enabled for possible coalescing.
-    OrderedMoveSet m_worklistMoves;
-    // Set of &quot;move&quot; not yet ready for coalescing.
-    BitVector m_activeMoves;
-};
-
-// This performs all the tasks that are specific to a certain register type.
-template&lt;Arg::Type type, template&lt;typename, typename&gt; class AllocatorType&gt;
-class ColoringAllocator : public AllocatorType&lt;unsigned, AbsoluteTmpMapper&lt;type&gt;&gt; {
-    using TmpMapper = AbsoluteTmpMapper&lt;type&gt;;
-    using Base = AllocatorType&lt;unsigned, TmpMapper&gt;;
-    using Base::m_isOnSelectStack;
-    using Base::m_selectStack;
-    using Base::m_framePointerIndex;
-    using Base::m_interferesWithFramePointer;
-    using Base::m_simplifyWorklist;
-    using Base::m_spillWorklist;
-    using Base::m_hasSelectedSpill;
-    using Base::m_hasCoalescedNonTrivialMove;
-    using Base::m_degrees;
-    using Base::registerCount;
-    using Base::m_coalescedTmps;
-    using Base::m_moveList;
-    using Base::m_useCounts;
-    using Base::m_coalescedTmpsAtSpill;
-    using Base::m_spilledTmps;
-    using MoveOperands = typename Base::MoveOperands;
-    using Base::m_coalescingCandidates;
-    using Base::m_lastPrecoloredRegisterIndex;
-    using Base::m_coloredTmp;
-    using Base::m_code;
-    using InterferenceEdge = typename Base::InterferenceEdge;
-    using Base::m_worklistMoves;
-    using Base::hasInterferenceEdge;
-    using Base::getAlias;
-    using Base::addEdge;
-    using Base::m_interferenceEdges;
-
-public:
-
-    ColoringAllocator(Code&amp; code, TmpWidth&amp; tmpWidth, const UseCounts&lt;Tmp&gt;&amp; useCounts, const HashSet&lt;unsigned&gt;&amp; unspillableTmp)
-        : Base(code, code.regsInPriorityOrder(type), TmpMapper::lastMachineRegisterIndex(), tmpArraySize(code), unspillableTmp, useCounts)
-        , m_tmpWidth(tmpWidth)
-    {
-        if (type == Arg::GP) {
-            m_framePointerIndex = TmpMapper::absoluteIndex(Tmp(MacroAssembler::framePointerRegister));
-            m_interferesWithFramePointer.ensureSize(tmpArraySize(code));
-        }
-
-        initializePrecoloredTmp();
-        build();
-    }
-
-    Tmp getAlias(Tmp tmp) const
-    {
-        return TmpMapper::tmpFromAbsoluteIndex(getAlias(TmpMapper::absoluteIndex(tmp)));
-    }
-
-    // This tells you if a Move will be coalescable if the src and dst end up matching. This method
-    // relies on an analysis that is invalidated by register allocation, so it's only meaningful to
-    // call this *before* replacing the Tmp's in this Inst with registers or spill slots.
-    bool mayBeCoalescable(const Inst&amp; inst) const
-    {
-        return mayBeCoalescableImpl(inst, &amp;m_tmpWidth);
-    }
-
-    bool isUselessMove(const Inst&amp; inst) const
-    {
-        return mayBeCoalescableImpl(inst, nullptr) &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp();
-    }
-
-    Tmp getAliasWhenSpilling(Tmp tmp) const
-    {
-        ASSERT_WITH_MESSAGE(!m_spilledTmps.isEmpty(), &quot;This function is only valid for coalescing during spilling.&quot;);
-
-        if (m_coalescedTmpsAtSpill.isEmpty())
-            return tmp;
-
-        unsigned aliasIndex = TmpMapper::absoluteIndex(tmp);
-        while (unsigned nextAliasIndex = m_coalescedTmpsAtSpill[aliasIndex])
-            aliasIndex = nextAliasIndex;
-
-        Tmp alias = TmpMapper::tmpFromAbsoluteIndex(aliasIndex);
-
-        ASSERT_WITH_MESSAGE(!m_spilledTmps.contains(aliasIndex) || alias == tmp, &quot;The aliases at spill should always be colorable. Something went horribly wrong.&quot;);
-
-        return alias;
-    }
-
-    template&lt;typename IndexIterator&gt;
-    class IndexToTmpIteratorAdaptor {
-    public:
-        IndexToTmpIteratorAdaptor(IndexIterator&amp;&amp; indexIterator)
-            : m_indexIterator(WTFMove(indexIterator))
-        {
-        }
-
-        Tmp operator*() const { return TmpMapper::tmpFromAbsoluteIndex(*m_indexIterator); }
-        IndexToTmpIteratorAdaptor&amp; operator++() { ++m_indexIterator; return *this; }
-
-        bool operator==(const IndexToTmpIteratorAdaptor&amp; other) const
-        {
-            return m_indexIterator == other.m_indexIterator;
-        }
-
-        bool operator!=(const IndexToTmpIteratorAdaptor&amp; other) const
-        {
-            return !(*this == other);
-        }
-
-    private:
-        IndexIterator m_indexIterator;
-    };
-
-    template&lt;typename Collection&gt;
-    class IndexToTmpIterableAdaptor {
-    public:
-        IndexToTmpIterableAdaptor(const Collection&amp; collection)
-            : m_collection(collection)
-        {
-        }
-
-        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; begin() const
-        {
-            return m_collection.begin();
-        }
-
-        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; end() const
-        {
-            return m_collection.end();
-        }
-
-    private:
-        const Collection&amp; m_collection;
-    };
-
-    IndexToTmpIterableAdaptor&lt;Vector&lt;unsigned&gt;&gt; spilledTmps() const { return m_spilledTmps; }
-
-    bool requiresSpilling() const { return !m_spilledTmps.isEmpty(); }
-
-    Reg allocatedReg(Tmp tmp) const
-    {
-        ASSERT(!tmp.isReg());
-        ASSERT(m_coloredTmp.size());
-        ASSERT(tmp.isGP() == (type == Arg::GP));
-
-        Reg reg = m_coloredTmp[TmpMapper::absoluteIndex(tmp)];
-        if (!reg) {
-            dataLog(&quot;FATAL: No color for &quot;, tmp, &quot;\n&quot;);
-            dataLog(&quot;Code:\n&quot;);
-            dataLog(m_code);
-            RELEASE_ASSERT_NOT_REACHED();
-        }
-        return reg;
-    }
-
-protected:
-    static unsigned tmpArraySize(Code&amp; code)
-    {
-        unsigned numTmps = code.numTmps(type);
-        return TmpMapper::absoluteIndex(numTmps);
-    }
-
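-    // Give every precolored Tmp (that is, every machine register) its own register as its color.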
-    void initializePrecoloredTmp()
-    {
-        m_coloredTmp.resize(m_lastPrecoloredRegisterIndex + 1);
-        for (unsigned i = 1; i &lt;= m_lastPrecoloredRegisterIndex; ++i) {
-            Tmp tmp = TmpMapper::tmpFromAbsoluteIndex(i);
-            ASSERT(tmp.isReg());
-            m_coloredTmp[i] = tmp.reg();
-        }
-    }
-
-    bool mayBeCoalesced(Arg left, Arg right)
-    {
-        if (!left.isTmp() || !right.isTmp())
-            return false;
-
-        Tmp leftTmp = left.tmp();
-        Tmp rightTmp = right.tmp();
-
-        if (leftTmp == rightTmp)
-            return false;
-
-        if (leftTmp.isGP() != (type == Arg::GP) || rightTmp.isGP() != (type == Arg::GP))
-            return false;
-
-        unsigned leftIndex = TmpMapper::absoluteIndex(leftTmp);
-        unsigned rightIndex = TmpMapper::absoluteIndex(rightTmp);
-
-        return !hasInterferenceEdge(InterferenceEdge(leftIndex, rightIndex));
-    }
-
-    void addToLowPriorityCoalescingCandidates(Arg left, Arg right)
-    {
-        ASSERT(mayBeCoalesced(left, right));
-        Tmp leftTmp = left.tmp();
-        Tmp rightTmp = right.tmp();
-
-        unsigned leftIndex = TmpMapper::absoluteIndex(leftTmp);
-        unsigned rightIndex = TmpMapper::absoluteIndex(rightTmp);
-
-        unsigned nextMoveIndex = m_coalescingCandidates.size();
-        m_coalescingCandidates.append({ leftIndex, rightIndex });
-
-        unsigned newIndexInWorklist = m_worklistMoves.addLowPriorityMove();
-        ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
-
-        m_moveList[leftIndex].add(nextMoveIndex);
-        m_moveList[rightIndex].add(nextMoveIndex);
-    }
-
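-    // Build the interference graph and the coalescing candidates by walking every block
-    // backward and maintaining liveness with LocalCalc.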
-    void build()
-    {
-        m_coalescingCandidates.clear();
-        m_worklistMoves.clear();
-
-        TmpLiveness&lt;type&gt; liveness(m_code);
-        for (BasicBlock* block : m_code) {
-            typename TmpLiveness&lt;type&gt;::LocalCalc localCalc(liveness, block);
-            for (unsigned instIndex = block-&gt;size(); instIndex--;) {
-                Inst&amp; inst = block-&gt;at(instIndex);
-                Inst* nextInst = block-&gt;get(instIndex + 1);
-                build(&amp;inst, nextInst, localCalc);
-                localCalc.execute(instIndex);
-            }
-            build(nullptr, &amp;block-&gt;at(0), localCalc);
-        }
-        buildLowPriorityMoveList();
-    }
-
-    void build(Inst* prevInst, Inst* nextInst, const typename TmpLiveness&lt;type&gt;::LocalCalc&amp; localCalc)
-    {
-        if (traceDebug)
-            dataLog(&quot;Building between &quot;, pointerDump(prevInst), &quot; and &quot;, pointerDump(nextInst), &quot;:\n&quot;);
-
-        Inst::forEachDefWithExtraClobberedRegs&lt;Tmp&gt;(
-            prevInst, nextInst,
-            [&amp;] (const Tmp&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
-                if (argType != type)
-                    return;
-                
-                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
-                // We should not use forEachDefWithExtraClobberedRegs() here since colored Tmps
-                // do not need interference edges in our implementation.
-                Inst::forEachDef&lt;Tmp&gt;(
-                    prevInst, nextInst,
-                    [&amp;] (Tmp&amp; otherArg, Arg::Role, Arg::Type argType, Arg::Width) {
-                        if (argType != type)
-                            return;
-                        
-                        if (traceDebug)
-                            dataLog(&quot;    Adding def-def edge: &quot;, arg, &quot;, &quot;, otherArg, &quot;\n&quot;);
-                        this-&gt;addEdge(arg, otherArg);
-                    });
-            });
-
-        if (prevInst &amp;&amp; mayBeCoalescable(*prevInst)) {
-            // We do not want the Use() of this move to interfere with the Def(), even if it is live
-            // after the Move. If we were to add the interference edge, it would be impossible to
-            // coalesce the Move even if the two Tmp never interfere anywhere.
-            Tmp defTmp;
-            Tmp useTmp;
-            prevInst-&gt;forEachTmp([&amp;defTmp, &amp;useTmp] (Tmp&amp; argTmp, Arg::Role role, Arg::Type, Arg::Width) {
-                if (Arg::isLateDef(role))
-                    defTmp = argTmp;
-                else {
-                    ASSERT(Arg::isEarlyUse(role));
-                    useTmp = argTmp;
-                }
-            });
-            ASSERT(defTmp);
-            ASSERT(useTmp);
-
-            unsigned nextMoveIndex = m_coalescingCandidates.size();
-            m_coalescingCandidates.append({ TmpMapper::absoluteIndex(useTmp), TmpMapper::absoluteIndex(defTmp) });
-            if (traceDebug)
-                dataLogLn(&quot;Move at index &quot;, nextMoveIndex, &quot; is: &quot;, *prevInst);
-
-            unsigned newIndexInWorklist = m_worklistMoves.addMove();
-            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
-
-            for (const Arg&amp; arg : prevInst-&gt;args) {
-                auto&amp; list = m_moveList[TmpMapper::absoluteIndex(arg.tmp())];
-                list.add(nextMoveIndex);
-            }
-
-            for (const Tmp&amp; liveTmp : localCalc.live()) {
-                if (liveTmp != useTmp) {
-                    if (traceDebug)
-                        dataLog(&quot;    Adding def-live for coalescable: &quot;, defTmp, &quot;, &quot;, liveTmp, &quot;\n&quot;);
-                    addEdge(defTmp, liveTmp);
-                }
-            }
-
-            // The next instruction could have early clobbers or early def's. We need to consider
-            // those now.
-            addEdges(nullptr, nextInst, localCalc.live());
-        } else
-            addEdges(prevInst, nextInst, localCalc.live());
-    }
-
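-    // On x86, instructions that can alias their destination with one of their sources benefit
-    // from getting the same register for both, so record those pairs as low-priority candidates.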
-    void buildLowPriorityMoveList()
-    {
-        if (!isX86())
-            return;
-
-        m_worklistMoves.startAddingLowPriorityMoves();
-        for (BasicBlock* block : m_code) {
-            for (Inst&amp; inst : *block) {
-                if (std::optional&lt;unsigned&gt; defArgIndex = inst.shouldTryAliasingDef()) {
-                    Arg op1 = inst.args[*defArgIndex - 2];
-                    Arg op2 = inst.args[*defArgIndex - 1];
-                    Arg dest = inst.args[*defArgIndex];
-
-                    if (op1 == dest || op2 == dest)
-                        continue;
-
-                    if (mayBeCoalesced(op1, dest))
-                        addToLowPriorityCoalescingCandidates(op1, dest);
-                    if (op1 != op2 &amp;&amp; mayBeCoalesced(op2, dest))
-                        addToLowPriorityCoalescingCandidates(op2, dest);
-                }
-            }
-        }
-    }
-
-    void addEdges(Inst* prevInst, Inst* nextInst, typename TmpLiveness&lt;type&gt;::LocalCalc::Iterable liveTmps)
-    {
-        // All the Def()s interfere with everything live.
-        Inst::forEachDefWithExtraClobberedRegs&lt;Tmp&gt;(
-            prevInst, nextInst,
-            [&amp;] (const Tmp&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
-                if (argType != type)
-                    return;
-                
-                for (const Tmp&amp; liveTmp : liveTmps) {
-                    ASSERT(liveTmp.isGP() == (type == Arg::GP));
-                    
-                    if (traceDebug)
-                        dataLog(&quot;    Adding def-live edge: &quot;, arg, &quot;, &quot;, liveTmp, &quot;\n&quot;);
-                    
-                    addEdge(arg, liveTmp);
-                }
-
-                if (type == Arg::GP &amp;&amp; !arg.isGPR())
-                    m_interferesWithFramePointer.quickSet(TmpMapper::absoluteIndex(arg));
-            });
-    }
-
-    void addEdge(Tmp a, Tmp b)
-    {
-        ASSERT_WITH_MESSAGE(a.isGP() == b.isGP(), &quot;An interference between registers of different types does not make sense, it can lead to non-colorable graphs.&quot;);
-
-        addEdge(TmpMapper::absoluteIndex(a), TmpMapper::absoluteIndex(b));
-    }
-
-    // Calling this without a tmpWidth will perform a more conservative coalescing analysis that assumes
-    // that Move32's are not coalescable.
-    static bool mayBeCoalescableImpl(const Inst&amp; inst, TmpWidth* tmpWidth)
-    {
-        switch (type) {
-        case Arg::GP:
-            switch (inst.kind.opcode) {
-            case Move:
-            case Move32:
-                break;
-            default:
-                return false;
-            }
-            break;
-        case Arg::FP:
-            switch (inst.kind.opcode) {
-            case MoveFloat:
-            case MoveDouble:
-                break;
-            default:
-                return false;
-            }
-            break;
-        }
-
-        ASSERT_WITH_MESSAGE(inst.args.size() == 2, &quot;We assume coalescable moves only have two arguments in a few places.&quot;);
-
-        if (!inst.args[0].isTmp() || !inst.args[1].isTmp())
-            return false;
-
-        ASSERT(inst.args[0].type() == type);
-        ASSERT(inst.args[1].type() == type);
-
-        // We can coalesce a Move32 so long as either of the following holds:
-        // - The input is already zero-filled.
-        // - The output only cares about the low 32 bits.
-        //
-        // Note that the input property requires an analysis over ZDef's, so it's only valid so long
-        // as the input gets a register. We don't know if the input gets a register, but we do know
-        // that if it doesn't get a register then we will still emit this Move32.
-        if (inst.kind.opcode == Move32) {
-            if (!tmpWidth)
-                return false;
-
-            if (tmpWidth-&gt;defWidth(inst.args[0].tmp()) &gt; Arg::Width32
-                &amp;&amp; tmpWidth-&gt;useWidth(inst.args[1].tmp()) &gt; Arg::Width32)
-                return false;
-        }
-        
-        return true;
-    }
-
-    TmpWidth&amp; m_tmpWidth;
-};
-
-class GraphColoringRegisterAllocation {
-public:
-    GraphColoringRegisterAllocation(Code&amp; code, UseCounts&lt;Tmp&gt;&amp; useCounts)
-        : m_code(code)
-        , m_useCounts(useCounts)
-    {
-    }
-
-    void run()
-    {
-        padInterference(m_code);
-
-        allocateOnType&lt;Arg::GP&gt;();
-        m_numIterations = 0;
-        allocateOnType&lt;Arg::FP&gt;();
-
-        fixSpillsAfterTerminals();
-
-        if (reportStats)
-            dataLog(&quot;Num iterations = &quot;, m_numIterations, &quot;\n&quot;);
-    }
-
-private:
-    template&lt;Arg::Type type&gt;
-    void allocateOnType()
-    {
-        HashSet&lt;unsigned&gt; unspillableTmps = computeUnspillableTmps&lt;type&gt;();
-
-        // FIXME: If a Tmp is used only from a Scratch role and that argument is !admitsStack, then
-        // we should add the Tmp to unspillableTmps. That will help avoid relooping only to turn the
-        // Tmp into an unspillable Tmp.
-        // https://bugs.webkit.org/show_bug.cgi?id=152699
-        
-        while (true) {
-            ++m_numIterations;
-
-            if (traceDebug)
-                dataLog(&quot;Code at iteration &quot;, m_numIterations, &quot;:\n&quot;, m_code);
-
-            // FIXME: One way to optimize this code is to remove the recomputation inside the fixpoint.
-            // We need to recompute because spilling adds tmps, but we could just update tmpWidth when we
-            // add those tmps. Note that one easy way to remove the recomputation is to make any newly
-            // added Tmps get the same use/def widths that the original Tmp got. But, this may hurt the
-            // spill code we emit. Since we currently recompute TmpWidth after spilling, the newly
-            // created Tmps may get narrower use/def widths. On the other hand, the spiller already
-            // selects which move instruction to use based on the original Tmp's widths, so it may not
-            // matter that a subsequent iteration sees a conservative width for the new Tmps. Also, the
-            // recomputation may not actually be a performance problem; it's likely that a better way to
-            // improve performance of TmpWidth is to replace its HashMap with something else. It's
-            // possible that most of the TmpWidth overhead is from queries of TmpWidth rather than the
-            // recomputation, in which case speeding up the lookup would be a bigger win.
-            // https://bugs.webkit.org/show_bug.cgi?id=152478
-            m_tmpWidth.recompute(m_code);
-
-            auto doAllocation = [&amp;] (auto&amp; allocator) -&gt; bool {
-                allocator.allocate();
-                if (!allocator.requiresSpilling()) {
-                    this-&gt;assignRegistersToTmp&lt;type&gt;(allocator);
-                    if (traceDebug)
-                        dataLog(&quot;Successful allocation at iteration &quot;, m_numIterations, &quot;:\n&quot;, m_code);
-
-                    return true;
-                }
-
-                this-&gt;addSpillAndFill&lt;type&gt;(allocator, unspillableTmps);
-                return false;
-            };
-            
-            bool done;
-            if ((isARM64() || Options::airForceBriggsAllocator()) &amp;&amp; !Options::airForceIRCAllocator()) {
-                ColoringAllocator&lt;type, Briggs&gt; allocator(m_code, m_tmpWidth, m_useCounts, unspillableTmps);
-                done = doAllocation(allocator);
-            } else {
-                ColoringAllocator&lt;type, IRC&gt; allocator(m_code, m_tmpWidth, m_useCounts, unspillableTmps);
-                done = doAllocation(allocator);
-            }
-            if (done)
-                return;
-        }
-    }
-
-    template&lt;Arg::Type type&gt;
-    HashSet&lt;unsigned&gt; computeUnspillableTmps()
-    {
-
-        HashSet&lt;unsigned&gt; unspillableTmps;
-
-        struct Range {
-            unsigned first { std::numeric_limits&lt;unsigned&gt;::max() };
-            unsigned last { 0 };
-            unsigned count { 0 };
-            unsigned admitStackCount { 0 };
-        };
-
-        unsigned numTmps = m_code.numTmps(type);
-        unsigned arraySize = AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(numTmps);
-
-        Vector&lt;Range, 0, UnsafeVectorOverflow&gt; ranges;
-        ranges.fill(Range(), arraySize);
-
-        unsigned globalIndex = 0;
-        for (BasicBlock* block : m_code) {
-            for (Inst&amp; inst : *block) {
-                inst.forEachArg([&amp;] (Arg&amp; arg, Arg::Role, Arg::Type argType, Arg::Width) {
-                    if (arg.isTmp() &amp;&amp; inst.admitsStack(arg)) {
-                        if (argType != type)
-                            return;
-
-                        Tmp tmp = arg.tmp();
-                        Range&amp; range = ranges[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
-                        range.count++;
-                        range.admitStackCount++;
-                        if (globalIndex &lt; range.first) {
-                            range.first = globalIndex;
-                            range.last = globalIndex;
-                        } else
-                            range.last = globalIndex;
-
-                        return;
-                    }
-
-                    arg.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
-                        if (tmp.isGP() != (type == Arg::GP))
-                            return;
-
-                        Range&amp; range = ranges[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
-                        range.count++;
-                        if (globalIndex &lt; range.first) {
-                            range.first = globalIndex;
-                            range.last = globalIndex;
-                        } else
-                            range.last = globalIndex;
-                    });
-                });
-
-                ++globalIndex;
-            }
-            ++globalIndex;
-        }
-        for (unsigned i = AbsoluteTmpMapper&lt;type&gt;::lastMachineRegisterIndex() + 1; i &lt; ranges.size(); ++i) {
-            Range&amp; range = ranges[i];
-            if (range.last - range.first &lt;= 1 &amp;&amp; range.count &gt; range.admitStackCount)
-                unspillableTmps.add(i);
-        }
-
-        return unspillableTmps;
-    }
-
-    template&lt;Arg::Type type, typename AllocatorType&gt;
-    void assignRegistersToTmp(const AllocatorType&amp; allocator)
-    {
-        for (BasicBlock* block : m_code) {
-            // Give Tmp a valid register.
-            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
-                Inst&amp; inst = block-&gt;at(instIndex);
-
-                // The mayBeCoalescable() method will change its mind for some operations after we
-                // complete register allocation. So, we record this before starting.
-                bool mayBeCoalescable = allocator.mayBeCoalescable(inst);
-
-                // Move32 is cheaper if we know that it's equivalent to a Move. It's
-                // equivalent if the destination's high bits are not observable or if the source's high
-                // bits are all zero. Note that we don't have the opposite optimization for other
-                // architectures, which may prefer Move over Move32, because Move is canonical already.
-                if (type == Arg::GP &amp;&amp; inst.kind.opcode == Move
-                    &amp;&amp; inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp()) {
-                    if (m_tmpWidth.useWidth(inst.args[1].tmp()) &lt;= Arg::Width32
-                        || m_tmpWidth.defWidth(inst.args[0].tmp()) &lt;= Arg::Width32)
-                        inst.kind.opcode = Move32;
-                }
-
-                inst.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
-                    if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
-                        return;
-
-                    Tmp aliasTmp = allocator.getAlias(tmp);
-                    Tmp assignedTmp;
-                    if (aliasTmp.isReg())
-                        assignedTmp = Tmp(aliasTmp.reg());
-                    else {
-                        auto reg = allocator.allocatedReg(aliasTmp);
-                        ASSERT(reg);
-                        assignedTmp = Tmp(reg);
-                    }
-                    ASSERT(assignedTmp.isReg());
-                    tmp = assignedTmp;
-                });
-
-                if (mayBeCoalescable &amp;&amp; inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp()
-                    &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp())
-                    inst = Inst();
-            }
-
-            // Remove all the useless moves we created in this block.
-            block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
-                return !inst;
-            });
-        }
-    }
-
-    static unsigned stackSlotMinimumWidth(Arg::Width width)
-    {
-        return width &lt;= Arg::Width32 ? 4 : 8;
-    }
-
-    template&lt;Arg::Type type, typename AllocatorType&gt;
-    void addSpillAndFill(const AllocatorType&amp; allocator, HashSet&lt;unsigned&gt;&amp; unspillableTmps)
-    {
-        HashMap&lt;Tmp, StackSlot*&gt; stackSlots;
-        for (Tmp tmp : allocator.spilledTmps()) {
-            // All the spilled values become unspillable.
-            unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
-
-            // Allocate stack slot for each spilled value.
-            StackSlot* stackSlot = m_code.addStackSlot(
-                stackSlotMinimumWidth(m_tmpWidth.requiredWidth(tmp)), StackSlotKind::Spill);
-            bool isNewTmp = stackSlots.add(tmp, stackSlot).isNewEntry;
-            ASSERT_UNUSED(isNewTmp, isNewTmp);
-        }
-
-        // Rewrite the program to get rid of the spilled Tmp.
-        InsertionSet insertionSet(m_code);
-        for (BasicBlock* block : m_code) {
-            bool hasAliasedTmps = false;
-
-            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
-                Inst&amp; inst = block-&gt;at(instIndex);
-
-                // The TmpWidth analysis will say that a Move only stores 32 bits into the destination,
-                // if the source only had 32 bits worth of non-zero bits. Same for the source: it will
-                // only claim to read 32 bits from the source if only 32 bits of the destination are
-                // read. Note that we only apply this logic if this turns into a load or store, since
-                // Move is the canonical way to move data between GPRs.
-                bool canUseMove32IfDidSpill = false;
-                bool didSpill = false;
-                if (type == Arg::GP &amp;&amp; inst.kind.opcode == Move) {
-                    if ((inst.args[0].isTmp() &amp;&amp; m_tmpWidth.width(inst.args[0].tmp()) &lt;= Arg::Width32)
-                        || (inst.args[1].isTmp() &amp;&amp; m_tmpWidth.width(inst.args[1].tmp()) &lt;= Arg::Width32))
-                        canUseMove32IfDidSpill = true;
-                }
-
-                // Try to replace the register use by memory use when possible.
-                inst.forEachArg(
-                    [&amp;] (Arg&amp; arg, Arg::Role role, Arg::Type argType, Arg::Width width) {
-                        if (!arg.isTmp())
-                            return;
-                        if (argType != type)
-                            return;
-                        if (arg.isReg())
-                            return;
-                        
-                        auto stackSlotEntry = stackSlots.find(arg.tmp());
-                        if (stackSlotEntry == stackSlots.end())
-                            return;
-                        if (!inst.admitsStack(arg))
-                            return;
-                        
-                        // If the Tmp holds a constant then we want to rematerialize its
-                        // value rather than loading it from the stack. In order for that
-                        // optimization to kick in, we need to avoid placing the Tmp's stack
-                        // address into the instruction.
-                        if (!Arg::isColdUse(role)) {
-                            const UseCounts&lt;Tmp&gt;::Counts* counts = m_useCounts[arg.tmp()];
-                            if (counts &amp;&amp; counts-&gt;numConstDefs == 1 &amp;&amp; counts-&gt;numDefs == 1)
-                                return;
-                        }
-                        
-                        Arg::Width spillWidth = m_tmpWidth.requiredWidth(arg.tmp());
-                        if (Arg::isAnyDef(role) &amp;&amp; width &lt; spillWidth)
-                            return;
-                        ASSERT(inst.kind.opcode == Move || !(Arg::isAnyUse(role) &amp;&amp; width &gt; spillWidth));
-                        
-                        if (spillWidth != Arg::Width32)
-                            canUseMove32IfDidSpill = false;
-                        
-                        stackSlotEntry-&gt;value-&gt;ensureSize(
-                            canUseMove32IfDidSpill ? 4 : Arg::bytes(width));
-                        arg = Arg::stack(stackSlotEntry-&gt;value);
-                        didSpill = true;
-                    });
-
-                if (didSpill &amp;&amp; canUseMove32IfDidSpill)
-                    inst.kind.opcode = Move32;
-
-                // For every other case, add Load/Store as needed.
-                inst.forEachTmp([&amp;] (Tmp&amp; tmp, Arg::Role role, Arg::Type argType, Arg::Width) {
-                    if (tmp.isReg() || argType != type)
-                        return;
-
-                    auto stackSlotEntry = stackSlots.find(tmp);
-                    if (stackSlotEntry == stackSlots.end()) {
-                        Tmp alias = allocator.getAliasWhenSpilling(tmp);
-                        if (alias != tmp) {
-                            tmp = alias;
-                            hasAliasedTmps = true;
-                        }
-                        return;
-                    }
-
-                    Arg::Width spillWidth = m_tmpWidth.requiredWidth(tmp);
-                    Opcode move = Oops;
-                    switch (stackSlotMinimumWidth(spillWidth)) {
-                    case 4:
-                        move = type == Arg::GP ? Move32 : MoveFloat;
-                        break;
-                    case 8:
-                        move = type == Arg::GP ? Move : MoveDouble;
-                        break;
-                    default:
-                        RELEASE_ASSERT_NOT_REACHED();
-                        break;
-                    }
-
-                    tmp = m_code.newTmp(type);
-                    unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
-
-                    Arg arg = Arg::stack(stackSlotEntry-&gt;value);
-                    if (Arg::isAnyUse(role) &amp;&amp; role != Arg::Scratch)
-                        insertionSet.insert(instIndex, move, inst.origin, arg, tmp);
-                    if (Arg::isAnyDef(role))
-                        insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
-                });
-            }
-            insertionSet.execute(block);
-
-            if (hasAliasedTmps) {
-                block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
-                    return allocator.isUselessMove(inst);
-                });
-            }
-        }
-    }
-
-    void fixSpillsAfterTerminals()
-    {
-        // Because there may be terminals that produce values, IRC may
-        // want to spill those terminals. When it does, the spill code ends
-        // up after the terminal. If we left the graph in this state, it'd be invalid
-        // because a terminal must be the last instruction in a block.
-        // We fix that here.
-
-        InsertionSet insertionSet(m_code);
-
-        bool addedBlocks = false;
-
-        for (BasicBlock* block : m_code) {
-            unsigned terminalIndex = block-&gt;size();
-            bool foundTerminal = false;
-            while (terminalIndex--) {
-                if (block-&gt;at(terminalIndex).isTerminal()) {
-                    foundTerminal = true;
-                    break;
-                }
-            }
-            ASSERT_UNUSED(foundTerminal, foundTerminal);
-
-            if (terminalIndex == block-&gt;size() - 1)
-                continue;
-
-            // There must be instructions after the terminal because it's not the last instruction.
-            ASSERT(terminalIndex &lt; block-&gt;size() - 1);
-            Vector&lt;Inst, 1&gt; instsToMove;
-            for (unsigned i = terminalIndex + 1; i &lt; block-&gt;size(); i++)
-                instsToMove.append(block-&gt;at(i));
-            RELEASE_ASSERT(instsToMove.size());
-
-            for (FrequentedBlock&amp; frequentedSuccessor : block-&gt;successors()) {
-                BasicBlock* successor = frequentedSuccessor.block();
-                // If successor's only predecessor is block, we can plant the spill inside
-                // the successor. Otherwise, we must split the critical edge and create
-                // a new block for the spill.
-                if (successor-&gt;numPredecessors() == 1) {
-                    insertionSet.insertInsts(0, instsToMove);
-                    insertionSet.execute(successor);
-                } else {
-                    addedBlocks = true;
-                    // FIXME: We probably want better block ordering here.
-                    BasicBlock* newBlock = m_code.addBlock();
-                    for (const Inst&amp; inst : instsToMove)
-                        newBlock-&gt;appendInst(inst);
-                    newBlock-&gt;appendInst(Inst(Jump, instsToMove.last().origin));
-                    newBlock-&gt;successors().append(successor);
-                    frequentedSuccessor.block() = newBlock;
-                }
-            }
-
-            block-&gt;resize(terminalIndex + 1);
-        }
-
-        if (addedBlocks)
-            m_code.resetReachability();
-    }
-
-    Code&amp; m_code;
-    TmpWidth m_tmpWidth;
-    UseCounts&lt;Tmp&gt;&amp; m_useCounts;
-    unsigned m_numIterations { 0 };
-};
-
-} // anonymous namespace
-
-void allocateRegistersByGraphColoring(Code&amp; code)
-{
-    PhaseScope phaseScope(code, &quot;Air::allocateRegistersByGraphColoring&quot;);
-
-    UseCounts&lt;Tmp&gt; useCounts(code);
-    GraphColoringRegisterAllocation graphColoringRegisterAllocation(code, useCounts);
-    graphColoringRegisterAllocation.run();
-}
-
-} } } // namespace JSC::B3::Air
-
-#endif // ENABLE(B3_JIT)
</del></span></pre></div>
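<p>The allocator deleted above follows the classic Chaitin/Briggs shape: build an interference graph over Tmps, simplify low-degree nodes onto a stack, then select registers as the stack unwinds, and spill whatever cannot be colored before trying again. The stand-alone C++ sketch below illustrates that simplify/select loop on a toy interference graph; it is a minimal illustration under assumed types (plain unsigned node indices and an adjacency map), not the Air implementation.</p>
<pre>
// Minimal Chaitin/Briggs-style simplify/select sketch (illustrative only; not AirGraphColoring).
#include &lt;cstdio&gt;
#include &lt;map&gt;
#include &lt;set&gt;
#include &lt;vector&gt;

using Node = unsigned;

struct InterferenceGraph {
    std::map&lt;Node, std::set&lt;Node&gt;&gt; adjacency;

    void addEdge(Node a, Node b)
    {
        if (a == b)
            return;
        adjacency[a].insert(b);
        adjacency[b].insert(a);
    }
};

// Returns a node -&gt; color map; ~0u marks a node that could not be colored (a spill candidate).
std::map&lt;Node, unsigned&gt; colorGraph(InterferenceGraph graph, unsigned numColors)
{
    // Simplify: repeatedly push the node with the smallest remaining degree. Pushing a node
    // whose degree is still &gt;= numColors is the optimistic part: it may get a color anyway.
    std::vector&lt;Node&gt; stack;
    std::set&lt;Node&gt; removed;
    auto remainingDegree = [&amp;] (Node n) {
        unsigned degree = 0;
        for (Node neighbor : graph.adjacency[n]) {
            if (!removed.count(neighbor))
                degree++;
        }
        return degree;
    };
    while (removed.size() &lt; graph.adjacency.size()) {
        bool found = false;
        Node candidate = 0;
        for (auto&amp; entry : graph.adjacency) {
            if (removed.count(entry.first))
                continue;
            if (!found || remainingDegree(entry.first) &lt; remainingDegree(candidate)) {
                candidate = entry.first;
                found = true;
            }
        }
        stack.push_back(candidate);
        removed.insert(candidate);
    }

    // Select: pop nodes and give each the lowest color not used by an already-colored neighbor.
    std::map&lt;Node, unsigned&gt; coloring;
    while (!stack.empty()) {
        Node node = stack.back();
        stack.pop_back();
        std::set&lt;unsigned&gt; blocked;
        for (Node neighbor : graph.adjacency[node]) {
            auto it = coloring.find(neighbor);
            if (it != coloring.end())
                blocked.insert(it-&gt;second);
        }
        unsigned color = 0;
        while (color &lt; numColors &amp;&amp; blocked.count(color))
            color++;
        coloring[node] = color &lt; numColors ? color : ~0u;
    }
    return coloring;
}

int main()
{
    // A triangle needs three colors; with only two, one node becomes a spill candidate.
    InterferenceGraph graph;
    graph.addEdge(0, 1);
    graph.addEdge(1, 2);
    graph.addEdge(0, 2);
    for (auto&amp; entry : colorGraph(graph, 2)) {
        if (entry.second == ~0u)
            std::printf("node %u spills\n", entry.first);
        else
            std::printf("node %u -&gt; r%u\n", entry.first, entry.second);
    }
    return 0;
}
</pre>
<p>In the deleted phase, the nodes are absolute Tmp indices, the colors are machine registers of one bank (GP or FP), and an uncolorable Tmp triggers another round of <code>addSpillAndFill</code> and reallocation rather than failing outright.</p>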
<a id="trunkSourceJavaScriptCoreb3airAirGraphColoringh"></a>
<div class="delfile"><h4>Deleted: trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.h (212812 => 212813)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.h        2017-02-22 07:59:59 UTC (rev 212812)
+++ trunk/Source/JavaScriptCore/b3/air/AirGraphColoring.h        2017-02-22 08:04:18 UTC (rev 212813)
</span><span class="lines">@@ -1,47 +0,0 @@
</span><del>-/*
- * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#pragma once
-
-#if ENABLE(B3_JIT)
-
-namespace JSC { namespace B3 { namespace Air {
-
-class Code;
-
-// We have two register allocators, both fundamentally derived from Chaitin's Yorktown
-// allocator:
-// http://cs.gmu.edu/~white/CS640/p98-chaitin.pdf
-//
-// We have an implementation of Briggs's optimistic allocator which is derivative of Chaitin's allocator:
-// http://www.cs.utexas.edu/users/mckinley/380C/lecs/briggs-thesis-1992.pdf
-//
-// And an implementation of Lal George and Andrew Appel's Iterated Register Coalescing, which is derivative of Briggs's allocator.
-// http://www.cs.cmu.edu/afs/cs/academic/class/15745-s07/www/papers/george.pdf
-void allocateRegistersByGraphColoring(Code&amp;);
-
-} } } // namespace JSC::B3::Air
-
-#endif // ENABLE(B3_JIT)
</del></span></pre>
</div>
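<p>Both allocators named in the deleted header coalesce move-related Tmps conservatively; the safety tests usually associated with them are Briggs's test (bound the merged node's high-degree neighbors) and George's test (check one operand's neighbors against the other). The snippet below is a schematic rendering of those published tests over a generic interference graph, under an assumed <code>Graph</code> typedef; it is not code from the deleted files.</p>
<pre>
// Schematic Briggs and George conservative-coalescing tests (illustration only).
#include &lt;map&gt;
#include &lt;set&gt;

using Node = unsigned;
using Graph = std::map&lt;Node, std::set&lt;Node&gt;&gt;; // node -&gt; interfering neighbors

static unsigned degree(const Graph&amp; graph, Node node)
{
    auto it = graph.find(node);
    return it == graph.end() ? 0 : static_cast&lt;unsigned&gt;(it-&gt;second.size());
}

// Briggs: merging a and b is safe if the merged node would have fewer than k
// neighbors of significant degree (degree &gt;= k), so simplify can still succeed.
bool briggsCanCoalesce(const Graph&amp; graph, Node a, Node b, unsigned k)
{
    std::set&lt;Node&gt; mergedNeighbors;
    auto addNeighbors = [&amp;] (Node n) {
        auto it = graph.find(n);
        if (it != graph.end())
            mergedNeighbors.insert(it-&gt;second.begin(), it-&gt;second.end());
    };
    addNeighbors(a);
    addNeighbors(b);
    mergedNeighbors.erase(a);
    mergedNeighbors.erase(b);
    unsigned significant = 0;
    for (Node neighbor : mergedNeighbors) {
        if (degree(graph, neighbor) &gt;= k)
            significant++;
    }
    return significant &lt; k;
}

// George: merging a into b is safe if every neighbor of a either already
// interferes with b or has insignificant degree (&lt; k).
bool georgeCanCoalesce(const Graph&amp; graph, Node a, Node b, unsigned k)
{
    auto aIt = graph.find(a);
    if (aIt == graph.end())
        return true;
    auto bIt = graph.find(b);
    for (Node neighbor : aIt-&gt;second) {
        if (neighbor == b || degree(graph, neighbor) &lt; k)
            continue;
        if (bIt != graph.end() &amp;&amp; bIt-&gt;second.count(neighbor))
            continue;
        return false;
    }
    return true;
}
</pre>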
</div>

</body>
</html>