<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[194316] trunk/Source/JavaScriptCore</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/194316">194316</a></dd>
<dt>Author</dt> <dd>fpizlo@apple.com</dd>
<dt>Date</dt> <dd>2015-12-19 13:04:55 -0800 (Sat, 19 Dec 2015)</dd>
</dl>

<h3>Log Message</h3>
<pre>[JSC] Streamline Tmp indexing inside the register allocator
https://bugs.webkit.org/show_bug.cgi?id=152420

Patch by Benjamin Poulain &lt;bpoulain@apple.com&gt; on 2015-12-19
Reviewed by Filip Pizlo.

AirIteratedRegisterCoalescing has been accumulating a bit of mess over time.

When it started, every map addressed by Tmp was using Tmp hashing.
That caused massive performance problems. Everything perf-sensitive was moved
to direct array addressing by the absolute Tmp index. This left the code
with half of the functions using Tmp and the other half using indices.

With this patch, almost everything is moved to absolute indexing.
There are a few advantages to this:
-No more conversion churn for Floating Point registers.
-Most of the functions can now be shared between GP and FP.
-A bit of cleanup since the core algorithm only deals with integers now.

This patch also changes the index type to be a template argument.
That will allow future specialization of &quot;m_interferenceEdges&quot; based
on the expected problem size.

Finally, the code related to the program modification (register assignment
and spilling) was moved to the wrapper &quot;IteratedRegisterCoalescing&quot;.

The current split is:
-AbstractColoringAllocator: common core. Share as much as possible between
 GP and FP.
-ColoringAllocator: the remaining parts of the algorithm, everything that
 is specific to GP or FP.
-IteratedRegisterCoalescing: the &quot;iterated&quot; part of the algorithm.
 Try to allocate and modify the code as needed.

The long term plan is:
-Move selectSpill() and the coloring loop to AbstractColoringAllocator.
-Specialize m_interferenceEdges to make it faster.

* b3/air/AirIteratedRegisterCoalescing.cpp:
* b3/air/AirTmpInlines.h:
(JSC::B3::Air::AbsoluteTmpMapper&lt;Arg::GP&gt;::lastMachineRegisterIndex):
(JSC::B3::Air::AbsoluteTmpMapper&lt;Arg::FP&gt;::lastMachineRegisterIndex):</pre>
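
<h3>Illustrative Sketch: Absolute Tmp Indexing</h3>
<p>The log message above describes replacing Tmp-keyed hash maps with dense arrays addressed by an
absolute Tmp index, where the machine registers occupy the low indices and ordinary Tmps follow.
The sketch below is not the WebKit implementation; it is a minimal, hypothetical C++ illustration
of that layout, and every name in it (&quot;AbsoluteIndexing&quot;, &quot;indexForTmp&quot;, and so on) is
invented for the example.</p>
<pre>
#include &lt;limits&gt;
#include &lt;vector&gt;

// Hypothetical indexing scheme: index 0 is a sentinel meaning &quot;no Tmp&quot;,
// machine registers take indices 1..registerCount, and spillable Tmps take
// the indices after that. Per-Tmp state can then live in flat arrays
// instead of HashMap&lt;Tmp, T&gt;.
struct AbsoluteIndexing {
    unsigned registerCount;

    unsigned lastMachineRegisterIndex() const { return registerCount; }
    unsigned indexForRegister(unsigned registerNumber) const { return 1 + registerNumber; }
    unsigned indexForTmp(unsigned tmpNumber) const { return 1 + registerCount + tmpNumber; }
};

int main()
{
    AbsoluteIndexing mapper { 16 };

    // Degrees become a flat array addressed by absolute index: O(1) access,
    // no hashing of Tmp objects.
    std::vector&lt;unsigned&gt; degrees(mapper.indexForTmp(100) + 1, 0);

    // Precolored machine registers get an &quot;infinite&quot; degree, mirroring the
    // initializeDegrees() logic in the patch below.
    for (unsigned i = 1; i &lt;= mapper.lastMachineRegisterIndex(); ++i)
        degrees[i] = std::numeric_limits&lt;unsigned&gt;::max();

    degrees[mapper.indexForTmp(42)]++;
    return 0;
}
</pre>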

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreb3airAirIteratedRegisterCoalescingcpp">trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreb3airAirTmpInlinesh">trunk/Source/JavaScriptCore/b3/air/AirTmpInlines.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (194315 => 194316)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog        2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/JavaScriptCore/ChangeLog        2015-12-19 21:04:55 UTC (rev 194316)
</span><span class="lines">@@ -1,5 +1,49 @@
</span><span class="cx"> 2015-12-19  Benjamin Poulain  &lt;bpoulain@apple.com&gt;
</span><span class="cx"> 
</span><ins>+        [JSC] Streamline Tmp indexing inside the register allocator
+        https://bugs.webkit.org/show_bug.cgi?id=152420
+
+        Reviewed by Filip Pizlo.
+
+        AirIteratedRegisterCoalescing has been accumulating a bit of mess over time.
+
+        When it started, every map addressed by Tmp was using Tmp hashing.
+        That caused massive performance problems. Everything perf-sensitive was moved
+        to direct array addressing by the absolute Tmp index. This left the code
+        with half of the functions using Tmp and the other half using indices.
+
+        With this patch, almost everything is moved to absolute indexing.
+        There are a few advantages to this:
+        -No more conversion churn for Floating Point registers.
+        -Most of the functions can now be shared between GP and FP.
+        -A bit of cleanup since the core algorithm only deals with integers now.
+
+        This patch also changes the index type to be a template argument.
+        That will allow future specialization of &quot;m_interferenceEdges&quot; based
+        on the expected problem size.
+
+        Finally, the code related to the program modification (register assignment
+        and spilling) was moved to the wrapper &quot;IteratedRegisterCoalescing&quot;.
+
+        The current split is:
+        -AbstractColoringAllocator: common core. Share as much as possible between
+         GP and FP.
+        -ColoringAllocator: the remaining parts of the algorithm, everything that
+         is specific to GP or FP.
+        -IteratedRegisterCoalescing: the &quot;iterated&quot; part of the algorithm.
+         Try to allocate and modify the code as needed.
+
+        The long term plan is:
+        -Move selectSpill() and the coloring loop to AbstractColoringAllocator.
+        -Specialize m_interferenceEdges to make it faster.
+
+        * b3/air/AirIteratedRegisterCoalescing.cpp:
+        * b3/air/AirTmpInlines.h:
+        (JSC::B3::Air::AbsoluteTmpMapper&lt;Arg::GP&gt;::lastMachineRegisterIndex):
+        (JSC::B3::Air::AbsoluteTmpMapper&lt;Arg::FP&gt;::lastMachineRegisterIndex):
+
+2015-12-19  Benjamin Poulain  &lt;bpoulain@apple.com&gt;
+
</ins><span class="cx">         [JSC] FTLB3Output generates some invalid ZExt32
</span><span class="cx">         https://bugs.webkit.org/show_bug.cgi?id=151905
</span><span class="cx"> 
</span></span></pre></div>
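<h3>Illustrative Sketch: Packing Interference Edges</h3>
<p>The ChangeLog above mentions the long-term plan to specialize &quot;m_interferenceEdges&quot;, and the
patch below already switches the edge key from a pair of Tmps to a pair of 32-bit indices packed
into a single 64-bit value. The sketch below is a simplified, stand-alone illustration of that
packing; the class name matches the patch, but the hashing traits are omitted and the surrounding
test code is invented.</p>
<pre>
#include &lt;cassert&gt;
#include &lt;cstdint&gt;
#include &lt;utility&gt;

// An unordered pair of non-zero 32-bit indices packed into one 64-bit value:
// the smaller index always lands in the high bits, so (a, b) and (b, a)
// produce the same value and the edge can be used directly as a set key.
class InterferenceEdge {
public:
    InterferenceEdge(std::uint32_t a, std::uint32_t b)
    {
        assert(a &amp;&amp; b &amp;&amp; a != b);
        if (b &lt; a)
            std::swap(a, b);
        m_value = static_cast&lt;std::uint64_t&gt;(a) &lt;&lt; 32 | b;
    }

    std::uint32_t first() const { return static_cast&lt;std::uint32_t&gt;(m_value &gt;&gt; 32); }
    std::uint32_t second() const { return static_cast&lt;std::uint32_t&gt;(m_value &amp; 0xffffffff); }

    bool operator==(const InterferenceEdge&amp; other) const { return m_value == other.m_value; }

private:
    std::uint64_t m_value { 0 };
};

int main()
{
    // The edge is unordered: both constructions pack to the same value.
    assert(InterferenceEdge(7, 42) == InterferenceEdge(42, 7));
    assert(InterferenceEdge(7, 42).first() == 7);
    return 0;
}
</pre>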
<a id="trunkSourceJavaScriptCoreb3airAirIteratedRegisterCoalescingcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp (194315 => 194316)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp        2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/JavaScriptCore/b3/air/AirIteratedRegisterCoalescing.cpp        2015-12-19 21:04:55 UTC (rev 194316)
</span><span class="lines">@@ -47,336 +47,97 @@
</span><span class="cx"> bool traceDebug = false;
</span><span class="cx"> bool reportStats = false;
</span><span class="cx"> 
</span><del>-template&lt;Arg::Type type&gt;
-struct MoveInstHelper;
-
-template&lt;&gt;
-struct MoveInstHelper&lt;Arg::GP&gt; {
-    static bool mayBeCoalescable(const Inst&amp; inst)
-    {
-        bool isMove = inst.opcode == Move;
-        if (!isMove)
-            return false;
-
-        ASSERT_WITH_MESSAGE(inst.args.size() == 2, &quot;We assume coalecable moves only have two arguments in a few places.&quot;);
-        ASSERT(inst.args[0].isType(Arg::GP));
-        ASSERT(inst.args[1].isType(Arg::GP));
-
-        return inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp();
-    }
-};
-
-template&lt;&gt;
-struct MoveInstHelper&lt;Arg::FP&gt; {
-    static bool mayBeCoalescable(const Inst&amp; inst)
-    {
-        if (inst.opcode != MoveDouble)
-            return false;
-
-        ASSERT_WITH_MESSAGE(inst.args.size() == 2, &quot;We assume coalecable moves only have two arguments in a few places.&quot;);
-        ASSERT(inst.args[0].isType(Arg::FP));
-        ASSERT(inst.args[1].isType(Arg::FP));
-
-        return inst.args[0].isTmp() &amp;&amp; inst.args[1].isTmp();
-    }
-};
-
-template&lt;Arg::Type type&gt;
-class IteratedRegisterCoalescingAllocator {
</del><ins>+// The AbstractColoringAllocator defines all the code that is independent
+// from the type or register and can be shared when allocating registers.
+template&lt;typename IndexType&gt;
+class AbstractColoringAllocator {
</ins><span class="cx"> public:
</span><del>-    IteratedRegisterCoalescingAllocator(
-        Code&amp; code, const UseCounts&lt;Tmp&gt;&amp; useCounts, const HashSet&lt;Tmp&gt;&amp; unspillableTmp)
-        : m_code(code)
-        , m_useCounts(useCounts)
-        , m_unspillableTmp(unspillableTmp)
-        , m_numberOfRegisters(regsInPriorityOrder(type).size())
</del><ins>+    AbstractColoringAllocator(const Vector&lt;Reg&gt;&amp; regsInPriorityOrder, IndexType lastPrecoloredRegisterIndex, unsigned tmpArraySize)
+        : m_regsInPriorityOrder(regsInPriorityOrder)
+        , m_lastPrecoloredRegisterIndex(lastPrecoloredRegisterIndex)
</ins><span class="cx">     {
</span><del>-        initializeDegrees();
</del><ins>+        initializeDegrees(tmpArraySize);
</ins><span class="cx"> 
</span><del>-        unsigned tmpArraySize = this-&gt;tmpArraySize();
</del><span class="cx">         m_adjacencyList.resize(tmpArraySize);
</span><span class="cx">         m_moveList.resize(tmpArraySize);
</span><del>-        m_coalescedTmps.resize(tmpArraySize);
</del><ins>+        m_coalescedTmps.fill(0, tmpArraySize);
</ins><span class="cx">         m_isOnSelectStack.ensureSize(tmpArraySize);
</span><del>-
-        build();
-        allocate();
</del><span class="cx">     }
</span><span class="cx"> 
</span><del>-    Tmp getAlias(Tmp tmp) const
</del><ins>+protected:
+    IndexType getAlias(IndexType tmpIndex) const
</ins><span class="cx">     {
</span><del>-        Tmp alias = tmp;
-        while (Tmp nextAlias = m_coalescedTmps[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(alias)])
</del><ins>+        IndexType alias = tmpIndex;
+        while (IndexType nextAlias = m_coalescedTmps[alias])
</ins><span class="cx">             alias = nextAlias;
</span><span class="cx">         return alias;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    Tmp getAliasWhenSpilling(Tmp tmp) const
</del><ins>+    void addEdge(IndexType a, IndexType b)
</ins><span class="cx">     {
</span><del>-        ASSERT_WITH_MESSAGE(!m_spilledTmp.isEmpty(), &quot;This function is only valid for coalescing during spilling.&quot;);
-
-        if (m_coalescedTmpsAtSpill.isEmpty())
-            return tmp;
-
-        Tmp alias = tmp;
-        while (Tmp nextAlias = m_coalescedTmpsAtSpill[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(alias)])
-            alias = nextAlias;
-
-        ASSERT_WITH_MESSAGE(!m_spilledTmp.contains(tmp) || alias == tmp, &quot;The aliases at spill should always be colorable. Something went horribly wrong.&quot;);
-
-        return alias;
-    }
-
-    const HashSet&lt;Tmp&gt;&amp; spilledTmp() const { return m_spilledTmp; }
-    Reg allocatedReg(Tmp tmp) const
-    {
-        ASSERT(!tmp.isReg());
-        ASSERT(m_coloredTmp.size());
-        ASSERT(tmp.isGP() == (type == Arg::GP));
-
-        Reg reg = m_coloredTmp[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
-        if (!reg) {
-            // We only care about Tmps that interfere. A Tmp that never interfere with anything
-            // can take any register.
-            reg = regsInPriorityOrder(type).first();
-        }
-        return reg;
-    }
-
-private:
-    unsigned tmpArraySize()
-    {
-        unsigned numTmps = m_code.numTmps(type);
-        return AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(numTmps);
-    }
-
-    void initializeDegrees()
-    {
-        unsigned tmpArraySize = this-&gt;tmpArraySize();
-        m_degrees.resize(tmpArraySize);
-
-        // All precolored registers have  an &quot;infinite&quot; degree.
-        unsigned firstNonRegIndex = AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(0);
-        for (unsigned i = 0; i &lt; firstNonRegIndex; ++i)
-            m_degrees[i] = std::numeric_limits&lt;unsigned&gt;::max();
-
-        bzero(m_degrees.data() + firstNonRegIndex, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
-    }
-
-    void build()
-    {
-        TmpLiveness&lt;type&gt; liveness(m_code);
-        for (BasicBlock* block : m_code) {
-            typename TmpLiveness&lt;type&gt;::LocalCalc localCalc(liveness, block);
-            for (unsigned instIndex = block-&gt;size(); instIndex--;) {
-                Inst&amp; inst = block-&gt;at(instIndex);
-                Inst* nextInst = instIndex + 1 &lt; block-&gt;size() ? &amp;block-&gt;at(instIndex + 1) : nullptr;
-                build(inst, nextInst, localCalc);
-                localCalc.execute(instIndex);
-            }
-        }
-    }
-
-    void build(Inst&amp; inst, Inst* nextInst, const typename TmpLiveness&lt;type&gt;::LocalCalc&amp; localCalc)
-    {
-        inst.forEachTmpWithExtraClobberedRegs(
-            nextInst,
-            [&amp;] (const Tmp&amp; arg, Arg::Role role, Arg::Type argType) {
-                if (!Arg::isDef(role) || argType != type)
-                    return;
-                
-                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
-                // We should not use forEachDefAndExtraClobberedTmp() here since colored Tmps
-                // do not need interference edges in our implementation.
-                inst.forEachTmp([&amp;] (Tmp&amp; otherArg, Arg::Role role, Arg::Type argType) {
-                        if (!Arg::isDef(role) || argType != type)
-                            return;
-                        
-                        addEdge(arg, otherArg);
-                    });
-            });
-
-        if (MoveInstHelper&lt;type&gt;::mayBeCoalescable(inst)) {
-            // We do not want the Use() of this move to interfere with the Def(), even if it is live
-            // after the Move. If we were to add the interference edge, it would be impossible to
-            // coalesce the Move even if the two Tmp never interfere anywhere.
-            Tmp defTmp;
-            Tmp useTmp;
-            inst.forEachTmp([&amp;defTmp, &amp;useTmp] (Tmp&amp; argTmp, Arg::Role role, Arg::Type) {
-                if (Arg::isDef(role))
-                    defTmp = argTmp;
-                else {
-                    ASSERT(Arg::isEarlyUse(role));
-                    useTmp = argTmp;
-                }
-            });
-            ASSERT(defTmp);
-            ASSERT(useTmp);
-
-            unsigned nextMoveIndex = m_coalescingCandidates.size();
-            m_coalescingCandidates.append({ useTmp, defTmp });
-
-            unsigned newIndexInWorklist = m_worklistMoves.addMove();
-            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
-
-            ASSERT(nextMoveIndex &lt;= m_activeMoves.size());
-            m_activeMoves.ensureSize(nextMoveIndex + 1);
-
-            for (const Arg&amp; arg : inst.args) {
-                auto&amp; list = m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(arg.tmp())];
-                list.add(nextMoveIndex);
-            }
-
-            for (const Tmp&amp; liveTmp : localCalc.live()) {
-                if (liveTmp != useTmp)
-                    addEdge(defTmp, liveTmp);
-            }
-
-            // The next instruction could have early clobbers. We need to consider those now.
-            if (nextInst &amp;&amp; nextInst-&gt;hasSpecial()) {
-                nextInst-&gt;extraEarlyClobberedRegs().forEach(
-                    [&amp;] (Reg reg) {
-                        if (reg.isGPR() == (type == Arg::GP)) {
-                            for (const Tmp&amp; liveTmp : localCalc.live()) {
-                                addEdge(Tmp(reg), liveTmp);
-                            }
-                        }
-                    });
-            }
-        } else
-            addEdges(inst, nextInst, localCalc.live());
-    }
-
-    void addEdges(Inst&amp; inst, Inst* nextInst, typename TmpLiveness&lt;type&gt;::LocalCalc::Iterable liveTmps)
-    {
-        // All the Def()s interfere with everthing live.
-        inst.forEachTmpWithExtraClobberedRegs(
-            nextInst,
-            [&amp;] (const Tmp&amp; arg, Arg::Role role, Arg::Type argType) {
-                if (!Arg::isDef(role) || argType != type)
-                    return;
-                
-                for (const Tmp&amp; liveTmp : liveTmps) {
-                    if (liveTmp.isGP() == (type == Arg::GP))
-                        addEdge(arg, liveTmp);
-                }
-            });
-    }
-
-    void addEdge(const Tmp&amp; a, const Tmp&amp; b)
-    {
</del><span class="cx">         if (a == b)
</span><span class="cx">             return;
</span><del>-
-        if (m_interferenceEdges.add(InterferenceEdge(a, b)).isNewEntry) {
-            if (!a.isReg()) {
-                ASSERT(!m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(a)].contains(b));
-                m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(a)].append(b);
-                m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(a)]++;
-            }
-
-            if (!b.isReg()) {
-                ASSERT(!m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(b)].contains(a));
-                m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(b)].append(a);
-                m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(b)]++;
-            }
-        }
</del><ins>+        addEdgeDistinct(a, b);
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     void makeWorkList()
</span><span class="cx">     {
</span><del>-        unsigned firstNonRegIndex = AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(0);
-        for (unsigned i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
</del><ins>+        IndexType firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (IndexType i = firstNonRegIndex; i &lt; m_degrees.size(); ++i) {
</ins><span class="cx">             unsigned degree = m_degrees[i];
</span><span class="cx">             if (!degree)
</span><span class="cx">                 continue;
</span><span class="cx"> 
</span><del>-            Tmp tmp = AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(i);
-
-            if (degree &gt;= m_numberOfRegisters)
-                m_spillWorklist.add(tmp);
-            else if (!m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)].isEmpty())
-                m_freezeWorklist.add(tmp);
</del><ins>+            if (degree &gt;= m_regsInPriorityOrder.size())
+                m_spillWorklist.add(i);
+            else if (!m_moveList[i].isEmpty())
+                m_freezeWorklist.add(i);
</ins><span class="cx">             else
</span><del>-                m_simplifyWorklist.append(tmp);
</del><ins>+                m_simplifyWorklist.append(i);
</ins><span class="cx">         }
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    // A low-degree vertex can always be colored: just pick any color not taken by its
+    // adjacent vertices.
+    // The &quot;Simplify&quot; phase takes a low-degree vertex out of the interference graph to simplify it.
</ins><span class="cx">     void simplify()
</span><span class="cx">     {
</span><del>-        Tmp last = m_simplifyWorklist.takeLast();
</del><ins>+        IndexType lastIndex = m_simplifyWorklist.takeLast();
</ins><span class="cx"> 
</span><del>-        ASSERT(!m_selectStack.contains(last));
-        ASSERT(!m_isOnSelectStack.get(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(last)));
-        m_selectStack.append(last);
-        m_isOnSelectStack.quickSet(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(last));
</del><ins>+        ASSERT(!m_selectStack.contains(lastIndex));
+        ASSERT(!m_isOnSelectStack.get(lastIndex));
+        m_selectStack.append(lastIndex);
+        m_isOnSelectStack.quickSet(lastIndex);
</ins><span class="cx"> 
</span><del>-        forEachAdjacent(last, [this](Tmp adjacentTmp) {
-            decrementDegree(adjacentTmp);
</del><ins>+        forEachAdjacent(lastIndex, [this](IndexType adjacentTmpIndex) {
+            decrementDegree(adjacentTmpIndex);
</ins><span class="cx">         });
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    template&lt;typename Function&gt;
-    void forEachAdjacent(Tmp tmp, Function function)
</del><ins>+    void freeze()
</ins><span class="cx">     {
</span><del>-        for (Tmp adjacentTmp : m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]) {
-            if (!hasBeenSimplified(adjacentTmp))
-                function(adjacentTmp);
-        }
</del><ins>+        IndexType victimIndex = m_freezeWorklist.takeAny();
+        ASSERT_WITH_MESSAGE(getAlias(victimIndex) == victimIndex, &quot;coalesce() should not leave aliased Tmp in the worklist.&quot;);
+        m_simplifyWorklist.append(victimIndex);
+        freezeMoves(victimIndex);
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    bool hasBeenSimplified(Tmp tmp)
</del><ins>+    void freezeMoves(IndexType tmpIndex)
</ins><span class="cx">     {
</span><del>-        return m_isOnSelectStack.quickGet(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)) || !!m_coalescedTmps[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
-    }
</del><ins>+        forEachNodeMoves(tmpIndex, [this, tmpIndex] (IndexType moveIndex) {
+            if (!m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.takeMove(moveIndex);
</ins><span class="cx"> 
</span><del>-    void decrementDegree(Tmp tmp)
-    {
-        ASSERT(m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]);
</del><ins>+            const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
+            IndexType srcTmpIndex = moveOperands.srcIndex;
+            IndexType dstTmpIndex = moveOperands.dstIndex;
</ins><span class="cx"> 
</span><del>-        unsigned oldDegree = m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]--;
-        if (oldDegree == m_numberOfRegisters) {
-            enableMovesOnValueAndAdjacents(tmp);
-            m_spillWorklist.remove(tmp);
-            if (isMoveRelated(tmp))
-                m_freezeWorklist.add(tmp);
-            else
-                m_simplifyWorklist.append(tmp);
-        }
-    }
-
-    template&lt;typename Function&gt;
-    void forEachNodeMoves(Tmp tmp, Function function)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                function(moveIndex);
-        }
-    }
-
-    bool isMoveRelated(Tmp tmp)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
-                return true;
-        }
-        return false;
-    }
-
-    void enableMovesOnValue(Tmp tmp)
-    {
-        for (unsigned moveIndex : m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]) {
-            if (m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.returnMove(moveIndex);
-        }
-    }
-
-    void enableMovesOnValueAndAdjacents(Tmp tmp)
-    {
-        enableMovesOnValue(tmp);
-
-        forEachAdjacent(tmp, [this] (Tmp adjacentTmp) {
-            enableMovesOnValue(adjacentTmp);
</del><ins>+            IndexType originalOtherTmp = srcTmpIndex != tmpIndex ? srcTmpIndex : dstTmpIndex;
+            IndexType otherTmpIndex = getAlias(originalOtherTmp);
+            if (m_degrees[otherTmpIndex] &lt; m_regsInPriorityOrder.size() &amp;&amp; !isMoveRelated(otherTmpIndex)) {
+                if (m_freezeWorklist.remove(otherTmpIndex))
+                    m_simplifyWorklist.append(otherTmpIndex);
+            }
</ins><span class="cx">         });
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="lines">@@ -384,12 +145,10 @@
</span><span class="cx">     {
</span><span class="cx">         unsigned moveIndex = m_worklistMoves.takeLastMove();
</span><span class="cx">         const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
</span><del>-        Tmp u = moveOperands.src;
-        u = getAlias(u);
-        Tmp v = moveOperands.dst;
-        v = getAlias(v);
</del><ins>+        IndexType u = getAlias(moveOperands.srcIndex);
+        IndexType v = getAlias(moveOperands.dstIndex);
</ins><span class="cx"> 
</span><del>-        if (v.isReg())
</del><ins>+        if (isPrecolored(v))
</ins><span class="cx">             std::swap(u, v);
</span><span class="cx"> 
</span><span class="cx">         if (traceDebug)
</span><span class="lines">@@ -400,7 +159,7 @@
</span><span class="cx"> 
</span><span class="cx">             if (traceDebug)
</span><span class="cx">                 dataLog(&quot;    Coalesced\n&quot;);
</span><del>-        } else if (v.isReg() || m_interferenceEdges.contains(InterferenceEdge(u, v))) {
</del><ins>+        } else if (isPrecolored(v) || m_interferenceEdges.contains(InterferenceEdge(u, v))) {
</ins><span class="cx">             addWorkList(u);
</span><span class="cx">             addWorkList(v);
</span><span class="cx"> 
</span><span class="lines">@@ -421,316 +180,261 @@
</span><span class="cx">         }
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    bool canBeSafelyCoalesced(Tmp u, Tmp v)
</del><ins>+    void assignColors()
</ins><span class="cx">     {
</span><del>-        ASSERT(!v.isReg());
-        if (u.isReg())
-            return precoloredCoalescingHeuristic(u, v);
-        return conservativeHeuristic(u, v);
-    }
</del><ins>+        ASSERT(m_simplifyWorklist.isEmpty());
+        ASSERT(m_worklistMoves.isEmpty());
+        ASSERT(m_freezeWorklist.isEmpty());
+        ASSERT(m_spillWorklist.isEmpty());
</ins><span class="cx"> 
</span><del>-    bool precoloredCoalescingHeuristic(Tmp u, Tmp v)
-    {
-        ASSERT(u.isReg());
-        ASSERT(!v.isReg());
</del><ins>+        // Reclaim as much memory as possible.
+        m_interferenceEdges.clear();
+        m_degrees.clear();
+        m_moveList.clear();
+        m_worklistMoves.clear();
+        m_simplifyWorklist.clear();
+        m_spillWorklist.clear();
+        m_freezeWorklist.clear();
</ins><span class="cx"> 
</span><del>-        // If any adjacent of the non-colored node is not an adjacent of the colored node AND has a degree &gt;= K
-        // there is a risk that this node needs to have the same color as our precolored node. If we coalesce such
-        // move, we may create an uncolorable graph.
-        const auto&amp; adjacentsOfV = m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(v)];
-        for (Tmp adjacentTmp : adjacentsOfV) {
-            if (!adjacentTmp.isReg()
-                &amp;&amp; !hasBeenSimplified(adjacentTmp)
-                &amp;&amp; m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(adjacentTmp)] &gt;= m_numberOfRegisters
-                &amp;&amp; !m_interferenceEdges.contains(InterferenceEdge(u, adjacentTmp)))
-                return false;
-        }
-        return true;
-    }
</del><ins>+        // Try to color the Tmp on the stack.
+        m_coloredTmp.resize(m_adjacencyList.size());
</ins><span class="cx"> 
</span><del>-    bool conservativeHeuristic(Tmp u, Tmp v)
-    {
-        // This is using the Briggs' conservative coalescing rule:
-        // If the number of combined adjacent node with a degree &gt;= K is less than K,
-        // it is safe to combine the two nodes. The reason is that we know that if the graph
-        // is colorable, we have fewer than K adjacents with high order and there is a color
-        // for the current node.
-        ASSERT(u != v);
-        ASSERT(!u.isReg());
-        ASSERT(!v.isReg());
</del><ins>+        while (!m_selectStack.isEmpty()) {
+            unsigned tmpIndex = m_selectStack.takeLast();
+            ASSERT(!isPrecolored(tmpIndex));
+            ASSERT(!m_coloredTmp[tmpIndex]);
</ins><span class="cx"> 
</span><del>-        const auto&amp; adjacentsOfU = m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(u)];
-        const auto&amp; adjacentsOfV = m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(v)];
</del><ins>+            RegisterSet coloredRegisters;
+            for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+                IndexType aliasTmpIndex = getAlias(adjacentTmpIndex);
+                Reg reg = m_coloredTmp[aliasTmpIndex];
</ins><span class="cx"> 
</span><del>-        if (adjacentsOfU.size() + adjacentsOfV.size() &lt; m_numberOfRegisters) {
-            // Shortcut: if the total number of adjacents is less than the number of register, the condition is always met.
-            return true;
-        }
</del><ins>+                ASSERT(!isPrecolored(aliasTmpIndex) || (isPrecolored(aliasTmpIndex) &amp;&amp; reg));
</ins><span class="cx"> 
</span><del>-        HashSet&lt;Tmp&gt; highOrderAdjacents;
</del><ins>+                if (reg)
+                    coloredRegisters.set(reg);
+            }
</ins><span class="cx"> 
</span><del>-        for (Tmp adjacentTmp : adjacentsOfU) {
-            ASSERT(adjacentTmp != v);
-            ASSERT(adjacentTmp != u);
-            if (!hasBeenSimplified(adjacentTmp) &amp;&amp; m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(adjacentTmp)] &gt;= m_numberOfRegisters) {
-                auto addResult = highOrderAdjacents.add(adjacentTmp);
-                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= m_numberOfRegisters)
-                    return false;
</del><ins>+            bool colorAssigned = false;
+            for (Reg reg : m_regsInPriorityOrder) {
+                if (!coloredRegisters.get(reg)) {
+                    m_coloredTmp[tmpIndex] = reg;
+                    colorAssigned = true;
+                    break;
+                }
</ins><span class="cx">             }
</span><ins>+
+            if (!colorAssigned)
+                m_spilledTmps.append(tmpIndex);
</ins><span class="cx">         }
</span><del>-        for (Tmp adjacentTmp : adjacentsOfV) {
-            ASSERT(adjacentTmp != u);
-            ASSERT(adjacentTmp != v);
-            if (!hasBeenSimplified(adjacentTmp) &amp;&amp; m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(adjacentTmp)] &gt;= m_numberOfRegisters) {
-                auto addResult = highOrderAdjacents.add(adjacentTmp);
-                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= m_numberOfRegisters)
-                    return false;
-            }
-        }
</del><ins>+        m_selectStack.clear();
</ins><span class="cx"> 
</span><del>-        ASSERT(highOrderAdjacents.size() &lt; m_numberOfRegisters);
-        return true;
</del><ins>+        if (m_spilledTmps.isEmpty())
+            m_coalescedTmpsAtSpill.clear();
+        else
+            m_coloredTmp.clear();
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void addWorkList(Tmp tmp)
</del><ins>+private:
+    void initializeDegrees(unsigned tmpArraySize)
</ins><span class="cx">     {
</span><del>-        if (!tmp.isReg() &amp;&amp; m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)] &lt; m_numberOfRegisters &amp;&amp; !isMoveRelated(tmp)) {
-            m_freezeWorklist.remove(tmp);
-            m_simplifyWorklist.append(tmp);
-        }
</del><ins>+        m_degrees.resize(tmpArraySize);
+
+        // All precolored registers have  an &quot;infinite&quot; degree.
+        unsigned firstNonRegIndex = m_lastPrecoloredRegisterIndex + 1;
+        for (unsigned i = 0; i &lt; firstNonRegIndex; ++i)
+            m_degrees[i] = std::numeric_limits&lt;unsigned&gt;::max();
+
+        bzero(m_degrees.data() + firstNonRegIndex, (tmpArraySize - firstNonRegIndex) * sizeof(unsigned));
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void combine(Tmp u, Tmp v)
</del><ins>+    void addEdgeDistinct(IndexType a, IndexType b)
</ins><span class="cx">     {
</span><del>-        if (!m_freezeWorklist.remove(v))
-            m_spillWorklist.remove(v);
</del><ins>+        ASSERT(a != b);
+        if (m_interferenceEdges.add(InterferenceEdge(a, b)).isNewEntry) {
+            if (!isPrecolored(a)) {
+                ASSERT(!m_adjacencyList[a].contains(b));
+                m_adjacencyList[a].append(b);
+                m_degrees[a]++;
+            }
</ins><span class="cx"> 
</span><del>-        ASSERT(!m_coalescedTmps[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(v)]);
-        m_coalescedTmps[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(v)] = u;
</del><ins>+            if (!isPrecolored(b)) {
+                ASSERT(!m_adjacencyList[b].contains(a));
+                m_adjacencyList[b].append(a);
+                m_degrees[b]++;
+            }
+        }
+    }
</ins><span class="cx"> 
</span><del>-        auto&amp; vMoves = m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(v)];
-        m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(u)].add(vMoves.begin(), vMoves.end());
</del><ins>+    void decrementDegree(IndexType tmpIndex)
+    {
+        ASSERT(m_degrees[tmpIndex]);
</ins><span class="cx"> 
</span><del>-        forEachAdjacent(v, [this, u] (Tmp adjacentTmp) {
-            addEdge(adjacentTmp, u);
-            decrementDegree(adjacentTmp);
-        });
</del><ins>+        unsigned oldDegree = m_degrees[tmpIndex]--;
+        if (oldDegree == m_regsInPriorityOrder.size()) {
+            enableMovesOnValueAndAdjacents(tmpIndex);
+            m_spillWorklist.remove(tmpIndex);
+            if (isMoveRelated(tmpIndex))
+                m_freezeWorklist.add(tmpIndex);
+            else
+                m_simplifyWorklist.append(tmpIndex);
+        }
+    }
</ins><span class="cx"> 
</span><del>-        if (m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(u)] &gt;= m_numberOfRegisters &amp;&amp; m_freezeWorklist.remove(u))
-            m_spillWorklist.add(u);
</del><ins>+    bool isMoveRelated(IndexType tmpIndex)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                return true;
+        }
+        return false;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void freeze()
</del><ins>+    template&lt;typename Function&gt;
+    void forEachAdjacent(IndexType tmpIndex, Function function)
</ins><span class="cx">     {
</span><del>-        Tmp victim = m_freezeWorklist.takeAny();
-        ASSERT_WITH_MESSAGE(getAlias(victim) == victim, &quot;coalesce() should not leave aliased Tmp in the worklist.&quot;);
-        m_simplifyWorklist.append(victim);
-        freezeMoves(victim);
</del><ins>+        for (IndexType adjacentTmpIndex : m_adjacencyList[tmpIndex]) {
+            if (!hasBeenSimplified(adjacentTmpIndex))
+                function(adjacentTmpIndex);
+        }
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void freezeMoves(Tmp tmp)
</del><ins>+    bool hasBeenSimplified(IndexType tmpIndex)
</ins><span class="cx">     {
</span><del>-        forEachNodeMoves(tmp, [this, tmp] (unsigned moveIndex) {
-            if (!m_activeMoves.quickClear(moveIndex))
-                m_worklistMoves.takeMove(moveIndex);
</del><ins>+        return m_isOnSelectStack.quickGet(tmpIndex) || !!m_coalescedTmps[tmpIndex];
+    }
</ins><span class="cx"> 
</span><del>-            const MoveOperands&amp; moveOperands = m_coalescingCandidates[moveIndex];
-            Tmp originalOtherTmp = moveOperands.src != tmp ? moveOperands.src : moveOperands.dst;
-            Tmp otherTmp = getAlias(originalOtherTmp);
-            if (m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(otherTmp)] &lt; m_numberOfRegisters &amp;&amp; !isMoveRelated(otherTmp)) {
-                if (m_freezeWorklist.remove(otherTmp))
-                    m_simplifyWorklist.append(otherTmp);
-            }
-        });
</del><ins>+    template&lt;typename Function&gt;
+    void forEachNodeMoves(IndexType tmpIndex, Function function)
+    {
+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickGet(moveIndex) || m_worklistMoves.contains(moveIndex))
+                function(moveIndex);
+        }
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void selectSpill()
</del><ins>+    void enableMovesOnValue(IndexType tmpIndex)
</ins><span class="cx">     {
</span><del>-        if (!m_hasSelectedSpill) {
-            m_hasSelectedSpill = true;
-
-            if (m_hasCoalescedNonTrivialMove)
-                m_coalescedTmpsAtSpill = m_coalescedTmps;
</del><ins>+        for (unsigned moveIndex : m_moveList[tmpIndex]) {
+            if (m_activeMoves.quickClear(moveIndex))
+                m_worklistMoves.returnMove(moveIndex);
</ins><span class="cx">         }
</span><ins>+    }
</ins><span class="cx"> 
</span><del>-        auto iterator = m_spillWorklist.begin();
</del><ins>+    void enableMovesOnValueAndAdjacents(IndexType tmpIndex)
+    {
+        enableMovesOnValue(tmpIndex);
</ins><span class="cx"> 
</span><del>-        while (iterator != m_spillWorklist.end() &amp;&amp; m_unspillableTmp.contains(*iterator))
-            ++iterator;
</del><ins>+        forEachAdjacent(tmpIndex, [this] (IndexType adjacentTmpIndex) {
+            enableMovesOnValue(adjacentTmpIndex);
+        });
+    }
</ins><span class="cx"> 
</span><del>-        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), &quot;It is not possible to color the Air graph with the number of available registers.&quot;);
</del><ins>+    bool isPrecolored(IndexType tmpIndex)
+    {
+        return tmpIndex &lt;= m_lastPrecoloredRegisterIndex;
+    }
</ins><span class="cx"> 
</span><del>-        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
-        // gets a register.
-        auto score = [&amp;] (Tmp tmp) -&gt; double {
-            // Air exposes the concept of &quot;fast tmps&quot;, and we interpret that to mean that the tmp
-            // should always be in a register.
-            if (m_code.isFastTmp(tmp))
-                return 0;
-            
-            // All else being equal, the score should be directly related to the degree.
-            double degree = static_cast&lt;double&gt;(m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]);
-
-            // All else being equal, the score should be inversely related to the number of warm uses and
-            // defs.
-            const UseCounts&lt;Tmp&gt;::Counts&amp; counts = m_useCounts[tmp];
-            double uses = counts.numWarmUses + counts.numDefs;
-
-            return degree / uses;
-        };
-
-        auto victimIterator = iterator;
-        double maxScore = score(*iterator);
-
-        ++iterator;
-        for (;iterator != m_spillWorklist.end(); ++iterator) {
-            double tmpScore = score(*iterator);
-            if (tmpScore &gt; maxScore) {
-                if (m_unspillableTmp.contains(*iterator))
-                    continue;
-
-                victimIterator = iterator;
-                maxScore = tmpScore;
-            }
</del><ins>+    void addWorkList(IndexType tmpIndex)
+    {
+        if (!isPrecolored(tmpIndex) &amp;&amp; m_degrees[tmpIndex] &lt; m_regsInPriorityOrder.size() &amp;&amp; !isMoveRelated(tmpIndex)) {
+            m_freezeWorklist.remove(tmpIndex);
+            m_simplifyWorklist.append(tmpIndex);
</ins><span class="cx">         }
</span><del>-
-        Tmp victimTmp = *victimIterator;
-        m_spillWorklist.remove(victimIterator);
-        m_simplifyWorklist.append(victimTmp);
-        freezeMoves(victimTmp);
</del><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void allocate()
</del><ins>+    void combine(IndexType u, IndexType v)
</ins><span class="cx">     {
</span><del>-        ASSERT_WITH_MESSAGE(m_activeMoves.size() &gt;= m_coalescingCandidates.size(), &quot;The activeMove set should be big enough for the quick operations of BitVector.&quot;);
</del><ins>+        if (!m_freezeWorklist.remove(v))
+            m_spillWorklist.remove(v);
</ins><span class="cx"> 
</span><del>-        makeWorkList();
</del><ins>+        ASSERT(!m_coalescedTmps[v]);
+        m_coalescedTmps[v] = u;
</ins><span class="cx"> 
</span><del>-        if (debug) {
-            dataLog(&quot;Interference: &quot;, listDump(m_interferenceEdges), &quot;\n&quot;);
-            dumpInterferenceGraphInDot(WTF::dataFile());
-            dataLog(&quot;Initial work list\n&quot;);
-            dumpWorkLists(WTF::dataFile());
-        }
</del><ins>+        auto&amp; vMoves = m_moveList[v];
+        m_moveList[u].add(vMoves.begin(), vMoves.end());
</ins><span class="cx"> 
</span><del>-        do {
-            if (traceDebug) {
-                dataLog(&quot;Before Graph simplification iteration\n&quot;);
-                dumpWorkLists(WTF::dataFile());
-            }
</del><ins>+        forEachAdjacent(v, [this, u] (IndexType adjacentTmpIndex) {
+            addEdgeDistinct(adjacentTmpIndex, u);
+            decrementDegree(adjacentTmpIndex);
+        });
</ins><span class="cx"> 
</span><del>-            if (!m_simplifyWorklist.isEmpty())
-                simplify();
-            else if (!m_worklistMoves.isEmpty())
-                coalesce();
-            else if (!m_freezeWorklist.isEmpty())
-                freeze();
-            else if (!m_spillWorklist.isEmpty())
-                selectSpill();
</del><ins>+        if (m_degrees[u] &gt;= m_regsInPriorityOrder.size() &amp;&amp; m_freezeWorklist.remove(u))
+            m_spillWorklist.add(u);
+    }
</ins><span class="cx"> 
</span><del>-            if (traceDebug) {
-                dataLog(&quot;After Graph simplification iteration\n&quot;);
-                dumpWorkLists(WTF::dataFile());
-            }
-        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || !m_spillWorklist.isEmpty());
-
-        assignColors();
</del><ins>+    bool canBeSafelyCoalesced(IndexType u, IndexType v)
+    {
+        ASSERT(!isPrecolored(v));
+        if (isPrecolored(u))
+            return precoloredCoalescingHeuristic(u, v);
+        return conservativeHeuristic(u, v);
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void assignColors()
</del><ins>+    bool conservativeHeuristic(IndexType u, IndexType v)
</ins><span class="cx">     {
</span><del>-        ASSERT(m_simplifyWorklist.isEmpty());
-        ASSERT(m_worklistMoves.isEmpty());
-        ASSERT(m_freezeWorklist.isEmpty());
-        ASSERT(m_spillWorklist.isEmpty());
</del><ins>+        // This is using the Briggs' conservative coalescing rule:
+        // If the number of combined adjacent nodes with a degree &gt;= K is less than K,
+        // it is safe to combine the two nodes. The reason is that we know that if the graph
+        // is colorable, we have fewer than K adjacents with high order and there is a color
+        // for the current node.
+        ASSERT(u != v);
+        ASSERT(!isPrecolored(u));
+        ASSERT(!isPrecolored(v));
</ins><span class="cx"> 
</span><del>-        // Reclaim as much memory as possible.
-        m_interferenceEdges.clear();
-        m_degrees.clear();
-        m_moveList.clear();
-        m_worklistMoves.clear();
-        m_simplifyWorklist.clear();
-        m_spillWorklist.clear();
-        m_freezeWorklist.clear();
</del><ins>+        const auto&amp; adjacentsOfU = m_adjacencyList[u];
+        const auto&amp; adjacentsOfV = m_adjacencyList[v];
</ins><span class="cx"> 
</span><del>-        // Try to color the Tmp on the stack.
-        m_coloredTmp.resize(m_adjacencyList.size());
-        const auto&amp; registersInPriorityOrder = regsInPriorityOrder(type);
</del><ins>+        if (adjacentsOfU.size() + adjacentsOfV.size() &lt; m_regsInPriorityOrder.size()) {
+            // Shortcut: if the total number of adjacents is less than the number of registers, the condition is always met.
+            return true;
+        }
</ins><span class="cx"> 
</span><del>-        while (!m_selectStack.isEmpty()) {
-            Tmp tmp = m_selectStack.takeLast();
-            ASSERT(!tmp.isReg());
-            ASSERT(!m_coloredTmp[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]);
</del><ins>+        HashSet&lt;IndexType&gt; highOrderAdjacents;
</ins><span class="cx"> 
</span><del>-            RegisterSet coloredRegisters;
-            for (Tmp adjacentTmp : m_adjacencyList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]) {
-                Tmp aliasTmp = getAlias(adjacentTmp);
-                if (aliasTmp.isReg()) {
-                    coloredRegisters.set(aliasTmp.reg());
-                    continue;
-                }
-
-                Reg reg = m_coloredTmp[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(aliasTmp)];
-                if (reg)
-                    coloredRegisters.set(reg);
</del><ins>+        for (IndexType adjacentTmpIndex : adjacentsOfU) {
+            ASSERT(adjacentTmpIndex != v);
+            ASSERT(adjacentTmpIndex != u);
+            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= m_regsInPriorityOrder.size()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= m_regsInPriorityOrder.size())
+                    return false;
</ins><span class="cx">             }
</span><del>-
-            bool colorAssigned = false;
-            for (Reg reg : registersInPriorityOrder) {
-                if (!coloredRegisters.get(reg)) {
-                    m_coloredTmp[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)] = reg;
-                    colorAssigned = true;
-                    break;
-                }
</del><ins>+        }
+        for (IndexType adjacentTmpIndex : adjacentsOfV) {
+            ASSERT(adjacentTmpIndex != u);
+            ASSERT(adjacentTmpIndex != v);
+            if (!hasBeenSimplified(adjacentTmpIndex) &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= m_regsInPriorityOrder.size()) {
+                auto addResult = highOrderAdjacents.add(adjacentTmpIndex);
+                if (addResult.isNewEntry &amp;&amp; highOrderAdjacents.size() &gt;= m_regsInPriorityOrder.size())
+                    return false;
</ins><span class="cx">             }
</span><del>-
-            if (!colorAssigned)
-                m_spilledTmp.add(tmp);
</del><span class="cx">         }
</span><del>-        m_selectStack.clear();
</del><span class="cx"> 
</span><del>-        if (m_spilledTmp.isEmpty())
-            m_coalescedTmpsAtSpill.clear();
-        else
-            m_coloredTmp.clear();
</del><ins>+        ASSERT(highOrderAdjacents.size() &lt; m_regsInPriorityOrder.size());
+        return true;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-#if PLATFORM(COCOA)
-#pragma mark - Debugging helpers.
-#endif
-
-    void dumpInterferenceGraphInDot(PrintStream&amp; out)
</del><ins>+    bool precoloredCoalescingHeuristic(IndexType u, IndexType v)
</ins><span class="cx">     {
</span><del>-        out.print(&quot;graph InterferenceGraph { \n&quot;);
</del><ins>+        ASSERT(isPrecolored(u));
+        ASSERT(!isPrecolored(v));
</ins><span class="cx"> 
</span><del>-        HashSet&lt;Tmp&gt; tmpsWithInterferences;
-        for (const auto&amp; edge : m_interferenceEdges) {
-            tmpsWithInterferences.add(edge.first());
-            tmpsWithInterferences.add(edge.second());
</del><ins>+        // If any adjacent of the non-colored node is not an adjacent of the colored node AND has a degree &gt;= K
+        // there is a risk that this node needs to have the same color as our precolored node. If we coalesce such
+        // move, we may create an uncolorable graph.
+        const auto&amp; adjacentsOfV = m_adjacencyList[v];
+        for (unsigned adjacentTmpIndex : adjacentsOfV) {
+            if (!isPrecolored(adjacentTmpIndex)
+                &amp;&amp; !hasBeenSimplified(adjacentTmpIndex)
+                &amp;&amp; m_degrees[adjacentTmpIndex] &gt;= m_regsInPriorityOrder.size()
+                &amp;&amp; !m_interferenceEdges.contains(InterferenceEdge(u, adjacentTmpIndex)))
+                return false;
</ins><span class="cx">         }
</span><del>-
-        for (const auto&amp; tmp : tmpsWithInterferences)
-            out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot; (&quot;, m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)], &quot;)\&quot;];\n&quot;);
-
-        for (const auto&amp; edge : m_interferenceEdges)
-            out.print(&quot;    &quot;, edge.first().internalValue(), &quot; -- &quot;, edge.second().internalValue(), &quot;;\n&quot;);
-        out.print(&quot;}\n&quot;);
</del><ins>+        return true;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    void dumpWorkLists(PrintStream&amp; out)
-    {
-        out.print(&quot;Simplify work list:\n&quot;);
-        for (Tmp tmp : m_simplifyWorklist)
-            out.print(&quot;    &quot;, tmp, &quot;\n&quot;);
-        out.printf(&quot;Moves work list is empty? %d\n&quot;, m_worklistMoves.isEmpty());
-        out.print(&quot;Freeze work list:\n&quot;);
-        for (Tmp tmp : m_freezeWorklist)
-            out.print(&quot;    &quot;, tmp, &quot;\n&quot;);
-        out.print(&quot;Spill work list:\n&quot;);
-        for (Tmp tmp : m_spillWorklist)
-            out.print(&quot;    &quot;, tmp, &quot;\n&quot;);
-    }
-
</del><ins>+protected:
</ins><span class="cx"> #if PLATFORM(COCOA)
</span><span class="cx"> #pragma mark -
</span><span class="cx"> #endif
</span><span class="lines">@@ -743,17 +447,15 @@
</span><span class="cx">         {
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-        InterferenceEdge(Tmp a, Tmp b)
</del><ins>+        InterferenceEdge(IndexType a, IndexType b)
</ins><span class="cx">         {
</span><del>-            ASSERT(a.internalValue());
-            ASSERT(b.internalValue());
</del><ins>+            ASSERT(a);
+            ASSERT(b);
</ins><span class="cx">             ASSERT_WITH_MESSAGE(a != b, &quot;A Tmp can never interfere with itself. Doing so would force it to be the superposition of two registers.&quot;);
</span><span class="cx"> 
</span><del>-            unsigned aInternal = a.internalValue();
-            unsigned bInternal = b.internalValue();
-            if (bInternal &lt; aInternal)
-                std::swap(aInternal, bInternal);
-            m_value = static_cast&lt;uint64_t&gt;(aInternal) &lt;&lt; 32 | bInternal;
</del><ins>+            if (b &lt; a)
+                std::swap(a, b);
+            m_value = static_cast&lt;uint64_t&gt;(a) &lt;&lt; 32 | b;
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         InterferenceEdge(WTF::HashTableDeletedValueType)
</span><span class="lines">@@ -761,14 +463,14 @@
</span><span class="cx">         {
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-        Tmp first() const
</del><ins>+        IndexType first() const
</ins><span class="cx">         {
</span><del>-            return Tmp::tmpForInternalValue(m_value &gt;&gt; 32);
</del><ins>+            return m_value &gt;&gt; 32 &amp; 0xffffffff;
</ins><span class="cx">         }
</span><span class="cx"> 
</span><del>-        Tmp second() const
</del><ins>+        IndexType second() const
</ins><span class="cx">         {
</span><del>-            return Tmp::tmpForInternalValue(m_value &amp; 0xffffffff);
</del><ins>+            return m_value &amp; 0xffffffff;
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         bool operator==(const InterferenceEdge other) const
</span><span class="lines">@@ -802,38 +504,35 @@
</span><span class="cx">     };
</span><span class="cx">     typedef SimpleClassHashTraits&lt;InterferenceEdge&gt; InterferenceEdgeHashTraits;
</span><span class="cx"> 
</span><del>-    Code&amp; m_code;
-    const UseCounts&lt;Tmp&gt;&amp; m_useCounts;
</del><ins>+    const Vector&lt;Reg&gt;&amp; m_regsInPriorityOrder;
+    IndexType m_lastPrecoloredRegisterIndex { 0 };
</ins><span class="cx"> 
</span><del>-    const HashSet&lt;Tmp&gt;&amp; m_unspillableTmp;
-    unsigned m_numberOfRegisters { 0 };
-
</del><span class="cx">     // The interference graph.
</span><span class="cx">     HashSet&lt;InterferenceEdge, InterferenceEdgeHash, InterferenceEdgeHashTraits&gt; m_interferenceEdges;
</span><del>-    Vector&lt;Vector&lt;Tmp, 0, UnsafeVectorOverflow, 4&gt;, 0, UnsafeVectorOverflow&gt; m_adjacencyList;
-    Vector&lt;unsigned, 0, UnsafeVectorOverflow&gt; m_degrees;
</del><ins>+    Vector&lt;Vector&lt;IndexType, 0, UnsafeVectorOverflow, 4&gt;, 0, UnsafeVectorOverflow&gt; m_adjacencyList;
+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_degrees;
</ins><span class="cx"> 
</span><span class="cx">     // Instead of keeping track of the move instructions, we just keep their operands around and use the index
</span><span class="cx">     // in the vector as the &quot;identifier&quot; for the move.
</span><span class="cx">     struct MoveOperands {
</span><del>-        Tmp src;
-        Tmp dst;
</del><ins>+        IndexType srcIndex;
+        IndexType dstIndex;
</ins><span class="cx">     };
</span><span class="cx">     Vector&lt;MoveOperands, 0, UnsafeVectorOverflow&gt; m_coalescingCandidates;
</span><span class="cx"> 
</span><span class="cx">     // List of every move instruction associated with a Tmp.
</span><del>-    Vector&lt;HashSet&lt;unsigned, DefaultHash&lt;unsigned&gt;::Hash, WTF::UnsignedWithZeroKeyHashTraits&lt;unsigned&gt;&gt;&gt; m_moveList;
</del><ins>+    Vector&lt;HashSet&lt;IndexType, typename DefaultHash&lt;IndexType&gt;::Hash, WTF::UnsignedWithZeroKeyHashTraits&lt;IndexType&gt;&gt;&gt; m_moveList;
</ins><span class="cx"> 
</span><span class="cx">     // Colors.
</span><span class="cx">     Vector&lt;Reg, 0, UnsafeVectorOverflow&gt; m_coloredTmp;
</span><del>-    HashSet&lt;Tmp&gt; m_spilledTmp;
</del><ins>+    Vector&lt;IndexType&gt; m_spilledTmps;
</ins><span class="cx"> 
</span><span class="cx">     // Values that have been coalesced with an other value.
</span><del>-    Vector&lt;Tmp, 0, UnsafeVectorOverflow&gt; m_coalescedTmps;
</del><ins>+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmps;
</ins><span class="cx"> 
</span><span class="cx">     // The stack of Tmp removed from the graph and ready for coloring.
</span><span class="cx">     BitVector m_isOnSelectStack;
</span><del>-    Vector&lt;Tmp&gt; m_selectStack;
</del><ins>+    Vector&lt;IndexType&gt; m_selectStack;
</ins><span class="cx"> 
</span><span class="cx">     struct OrderedMoveSet {
</span><span class="cx">         unsigned addMove()
</span><span class="lines">@@ -914,155 +613,571 @@
</span><span class="cx">     // Set of &quot;move&quot; not yet ready for coalescing.
</span><span class="cx">     BitVector m_activeMoves;
</span><span class="cx">     // Low-degree, non-Move related.
</span><del>-    Vector&lt;Tmp&gt; m_simplifyWorklist;
</del><ins>+    Vector&lt;IndexType&gt; m_simplifyWorklist;
</ins><span class="cx">     // High-degree Tmp.
</span><del>-    HashSet&lt;Tmp&gt; m_spillWorklist;
</del><ins>+    HashSet&lt;IndexType&gt; m_spillWorklist;
</ins><span class="cx">     // Low-degree, Move related.
</span><del>-    HashSet&lt;Tmp&gt; m_freezeWorklist;
</del><ins>+    HashSet&lt;IndexType&gt; m_freezeWorklist;
</ins><span class="cx"> 
</span><span class="cx">     bool m_hasSelectedSpill { false };
</span><span class="cx">     bool m_hasCoalescedNonTrivialMove { false };
</span><span class="cx"> 
</span><span class="cx">     // The mapping of Tmp to their alias for Moves that are always coalescing regardless of spilling.
</span><del>-    Vector&lt;Tmp, 0, UnsafeVectorOverflow&gt; m_coalescedTmpsAtSpill;
</del><ins>+    Vector&lt;IndexType, 0, UnsafeVectorOverflow&gt; m_coalescedTmpsAtSpill;
</ins><span class="cx"> };
</span><span class="cx"> 
</span><ins>+// This performs all the tasks that are specific to a certain register type.
</ins><span class="cx"> template&lt;Arg::Type type&gt;
</span><del>-bool isUselessMoveInst(const Inst&amp; inst)
-{
-    return MoveInstHelper&lt;type&gt;::mayBeCoalescable(inst) &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp();
-}
</del><ins>+class ColoringAllocator : public AbstractColoringAllocator&lt;unsigned&gt; {
+public:
+    ColoringAllocator(Code&amp; code, const UseCounts&lt;Tmp&gt;&amp; useCounts, const HashSet&lt;unsigned&gt;&amp; unspillableTmp)
+        : AbstractColoringAllocator&lt;unsigned&gt;(regsInPriorityOrder(type), AbsoluteTmpMapper&lt;type&gt;::lastMachineRegisterIndex(), tmpArraySize(code))
+        , m_code(code)
+        , m_useCounts(useCounts)
+        , m_unspillableTmps(unspillableTmp)
+    {
+        initializePrecoloredTmp();
+        build();
+        allocate();
+    }
</ins><span class="cx"> 
</span><del>-template&lt;Arg::Type type&gt;
-void assignRegisterToTmpInProgram(Code&amp; code, const IteratedRegisterCoalescingAllocator&lt;type&gt;&amp; allocator)
-{
-    for (BasicBlock* block : code) {
-        // Give Tmp a valid register.
-        for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
-            Inst&amp; inst = block-&gt;at(instIndex);
-            inst.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
-                if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
-                    return;
</del><ins>+    Tmp getAlias(Tmp tmp) const
+    {
+        return AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(getAlias(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)));
+    }
</ins><span class="cx"> 
</span><del>-                Tmp aliasTmp = allocator.getAlias(tmp);
-                Tmp assignedTmp;
-                if (aliasTmp.isReg())
-                    assignedTmp = Tmp(aliasTmp.reg());
-                else {
-                    auto reg = allocator.allocatedReg(aliasTmp);
-                    ASSERT(reg);
-                    assignedTmp = Tmp(reg);
-                }
-                ASSERT(assignedTmp.isReg());
-                tmp = assignedTmp;
-            });
</del><ins>+    bool isUselessMove(const Inst&amp; inst) const
+    {
+        return mayBeCoalescable(inst) &amp;&amp; inst.args[0].tmp() == inst.args[1].tmp();
+    }
+
+    Tmp getAliasWhenSpilling(Tmp tmp) const
+    {
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.isEmpty(), &quot;This function is only valid for coalescing during spilling.&quot;);
+
+        if (m_coalescedTmpsAtSpill.isEmpty())
+            return tmp;
+
+        unsigned aliasIndex = AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp);
+        while (unsigned nextAliasIndex = m_coalescedTmpsAtSpill[aliasIndex])
+            aliasIndex = nextAliasIndex;
+
+        Tmp alias = AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(aliasIndex);
+
+        ASSERT_WITH_MESSAGE(!m_spilledTmps.contains(aliasIndex) || alias == tmp, &quot;The aliases at spill should always be colorable. Something went horribly wrong.&quot;);
+
+        return alias;
+    }
+
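    // Editor's note: m_coalescedTmpsAtSpill stores alias chains by absolute index, with 0
    // meaning &quot;no alias&quot;. For example (hypothetical indices), if Tmp #5 was coalesced into
    // #9 and #9 into #2, then m_coalescedTmpsAtSpill[5] == 9 and m_coalescedTmpsAtSpill[9] == 2,
    // and the loop above resolves getAliasWhenSpilling() of #5 to the Tmp for index 2.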
+    template&lt;typename IndexIterator&gt;
+    class IndexToTmpIteratorAdaptor {
+    public:
+        IndexToTmpIteratorAdaptor(IndexIterator&amp;&amp; indexIterator)
+            : m_indexIterator(WTF::move(indexIterator))
+        {
</ins><span class="cx">         }
</span><span class="cx"> 
</span><del>-        // Remove all the useless moves we created in this block.
-        block-&gt;insts().removeAllMatching(isUselessMoveInst&lt;type&gt;);
</del><ins>+        Tmp operator*() const { return AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(*m_indexIterator); }
+        IndexToTmpIteratorAdaptor&amp; operator++() { ++m_indexIterator; return *this; }
+
+        bool operator==(const IndexToTmpIteratorAdaptor&amp; other) const
+        {
+            return m_indexIterator == other.m_indexIterator;
+        }
+
+        bool operator!=(const IndexToTmpIteratorAdaptor&amp; other) const
+        {
+            return !(*this == other);
+        }
+
+    private:
+        IndexIterator m_indexIterator;
+    };
+
+    template&lt;typename Collection&gt;
+    class IndexToTmpIterableAdaptor {
+    public:
+        IndexToTmpIterableAdaptor(const Collection&amp; collection)
+            : m_collection(collection)
+        {
+        }
+
+        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; begin() const
+        {
+            return m_collection.begin();
+        }
+
+        IndexToTmpIteratorAdaptor&lt;typename Collection::const_iterator&gt; end() const
+        {
+            return m_collection.end();
+        }
+
+    private:
+        const Collection&amp; m_collection;
+    };
+
+    IndexToTmpIterableAdaptor&lt;Vector&lt;unsigned&gt;&gt; spilledTmps() const { return m_spilledTmps; }
+
+    bool requiresSpilling() const { return !m_spilledTmps.isEmpty(); }
+
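    // Editor's note: the adaptor above lets callers iterate the spilled set as Tmps even
    // though it is stored as absolute indices, e.g.
    //     for (Tmp tmp : allocator.spilledTmps()) { ... allocate a stack slot for tmp ... }
    // which is how addSpillAndFill() below consumes it.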
+    Reg allocatedReg(Tmp tmp) const
+    {
+        ASSERT(!tmp.isReg());
+        ASSERT(m_coloredTmp.size());
+        ASSERT(tmp.isGP() == (type == Arg::GP));
+
+        Reg reg = m_coloredTmp[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)];
+        if (!reg) {
+            // We only care about Tmps that interfere. A Tmp that never interferes with anything
+            // can take any register.
+            reg = regsInPriorityOrder(type).first();
+        }
+        return reg;
</ins><span class="cx">     }
</span><del>-}
</del><span class="cx"> 
</span><del>-template&lt;Arg::Type type&gt;
-void addSpillAndFillToProgram(Code&amp; code, const IteratedRegisterCoalescingAllocator&lt;type&gt;&amp; allocator, HashSet&lt;Tmp&gt;&amp; unspillableTmp)
-{
-    const HashSet&lt;Tmp&gt;&amp; spilledTmp = allocator.spilledTmp();
</del><ins>+private:
+    static unsigned tmpArraySize(Code&amp; code)
+    {
+        unsigned numTmps = code.numTmps(type);
+        return AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(numTmps);
+    }
</ins><span class="cx"> 
</span><del>-    // All the spilled values become unspillable.
-    unspillableTmp.add(spilledTmp.begin(), spilledTmp.end());
</del><ins>+    void initializePrecoloredTmp()
+    {
+        m_coloredTmp.resize(m_lastPrecoloredRegisterIndex + 1);
+        for (unsigned i = 1; i &lt;= m_lastPrecoloredRegisterIndex; ++i) {
+            Tmp tmp = AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(i);
+            ASSERT(tmp.isReg());
+            m_coloredTmp[i] = tmp.reg();
+        }
+    }
</ins><span class="cx"> 
</span><del>-    // Allocate stack slot for each spilled value.
-    HashMap&lt;Tmp, StackSlot*&gt; stackSlots;
-    for (Tmp tmp : spilledTmp) {
-        bool isNewTmp = stackSlots.add(tmp, code.addStackSlot(8, StackSlotKind::Anonymous)).isNewEntry;
-        ASSERT_UNUSED(isNewTmp, isNewTmp);
</del><ins>+    void build()
+    {
+        TmpLiveness&lt;type&gt; liveness(m_code);
+        for (BasicBlock* block : m_code) {
+            typename TmpLiveness&lt;type&gt;::LocalCalc localCalc(liveness, block);
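            // Editor's note: instructions are visited in reverse so that localCalc.live()
            // describes what is live after the current instruction when build(inst, ...) runs;
            // localCalc.execute(instIndex) then steps the liveness back over that instruction.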
+            for (unsigned instIndex = block-&gt;size(); instIndex--;) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+                Inst* nextInst = instIndex + 1 &lt; block-&gt;size() ? &amp;block-&gt;at(instIndex + 1) : nullptr;
+                build(inst, nextInst, localCalc);
+                localCalc.execute(instIndex);
+            }
+        }
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    // Rewrite the program to get rid of the spilled Tmp.
-    InsertionSet insertionSet(code);
-    for (BasicBlock* block : code) {
-        bool hasAliasedTmps = false;
</del><ins>+    void build(Inst&amp; inst, Inst* nextInst, const typename TmpLiveness&lt;type&gt;::LocalCalc&amp; localCalc)
+    {
+        inst.forEachTmpWithExtraClobberedRegs(
+            nextInst,
+            [&amp;] (const Tmp&amp; arg, Arg::Role role, Arg::Type argType) {
+                if (!Arg::isDef(role) || argType != type)
+                    return;
+                
+                // All the Def()s interfere with each other and with all the extra clobbered Tmps.
+                // We should not use forEachDefAndExtraClobberedTmp() here since colored Tmps
+                // do not need interference edges in our implementation.
+                inst.forEachTmp([&amp;] (Tmp&amp; otherArg, Arg::Role role, Arg::Type argType) {
+                    if (!Arg::isDef(role) || argType != type)
+                        return;
</ins><span class="cx"> 
</span><del>-        for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
-            Inst&amp; inst = block-&gt;at(instIndex);
</del><ins>+                    addEdge(arg, otherArg);
+                });
+            });
</ins><span class="cx"> 
</span><del>-            // Try to replace the register use by memory use when possible.
-            for (unsigned i = 0; i &lt; inst.args.size(); ++i) {
-                Arg&amp; arg = inst.args[i];
-                if (arg.isTmp() &amp;&amp; arg.type() == type &amp;&amp; !arg.isReg()) {
-                    auto stackSlotEntry = stackSlots.find(arg.tmp());
-                    if (stackSlotEntry != stackSlots.end() &amp;&amp; inst.admitsStack(i))
-                        arg = Arg::stack(stackSlotEntry-&gt;value);
</del><ins>+        if (mayBeCoalescable(inst)) {
+            // We do not want the Use() of this move to interfere with the Def(), even if it is live
+            // after the Move. If we were to add the interference edge, it would be impossible to
+            // coalesce the Move even if the two Tmps never interfere anywhere.
+            Tmp defTmp;
+            Tmp useTmp;
+            inst.forEachTmp([&amp;defTmp, &amp;useTmp] (Tmp&amp; argTmp, Arg::Role role, Arg::Type) {
+                if (Arg::isDef(role))
+                    defTmp = argTmp;
+                else {
+                    ASSERT(Arg::isEarlyUse(role));
+                    useTmp = argTmp;
</ins><span class="cx">                 }
</span><ins>+            });
+            ASSERT(defTmp);
+            ASSERT(useTmp);
+
+            unsigned nextMoveIndex = m_coalescingCandidates.size();
+            m_coalescingCandidates.append({ AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(useTmp), AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(defTmp) });
+
+            unsigned newIndexInWorklist = m_worklistMoves.addMove();
+            ASSERT_UNUSED(newIndexInWorklist, newIndexInWorklist == nextMoveIndex);
+
+            ASSERT(nextMoveIndex &lt;= m_activeMoves.size());
+            m_activeMoves.ensureSize(nextMoveIndex + 1);
+
+            for (const Arg&amp; arg : inst.args) {
+                auto&amp; list = m_moveList[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(arg.tmp())];
+                list.add(nextMoveIndex);
</ins><span class="cx">             }
</span><span class="cx"> 
</span><del>-            // For every other case, add Load/Store as needed.
-            inst.forEachTmp([&amp;] (Tmp&amp; tmp, Arg::Role role, Arg::Type argType) {
-                if (tmp.isReg() || argType != type)
-                    return;
</del><ins>+            for (const Tmp&amp; liveTmp : localCalc.live()) {
+                if (liveTmp != useTmp)
+                    addEdge(defTmp, liveTmp);
+            }
</ins><span class="cx"> 
</span><del>-                auto stackSlotEntry = stackSlots.find(tmp);
-                if (stackSlotEntry == stackSlots.end()) {
-                    Tmp alias = allocator.getAliasWhenSpilling(tmp);
-                    if (alias != tmp) {
-                        tmp = alias;
-                        hasAliasedTmps = true;
</del><ins>+            // The next instruction could have early clobbers. We need to consider those now.
+            if (nextInst &amp;&amp; nextInst-&gt;hasSpecial()) {
+                nextInst-&gt;extraEarlyClobberedRegs().forEach([&amp;] (Reg reg) {
+                    if (reg.isGPR() == (type == Arg::GP)) {
+                        for (const Tmp&amp; liveTmp : localCalc.live())
+                            addEdge(Tmp(reg), liveTmp);
</ins><span class="cx">                     }
</span><ins>+                });
+            }
+        } else
+            addEdges(inst, nextInst, localCalc.live());
+    }
+
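    // ---- Editor's sketch (not part of this changeset), with hypothetical tmps, of the
    // coalescable-move case handled above:
    //     Move %a, %b        // %a is the Use, %b is the Def
    //     Add32 %c, %b
    // No %a--%b interference edge is recorded even if %a stays live after the Move, so the
    // two can still be coalesced later; %b does get edges against everything else live after
    // the Move (here %c), exactly as the loop over localCalc.live() above does. ----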
+    void addEdges(Inst&amp; inst, Inst* nextInst, typename TmpLiveness&lt;type&gt;::LocalCalc::Iterable liveTmps)
+    {
+        // All the Def()s interfere with everything live.
+        inst.forEachTmpWithExtraClobberedRegs(
+            nextInst,
+            [&amp;] (const Tmp&amp; arg, Arg::Role role, Arg::Type argType) {
+                if (!Arg::isDef(role) || argType != type)
</ins><span class="cx">                     return;
</span><ins>+                
+                for (const Tmp&amp; liveTmp : liveTmps) {
+                    if (liveTmp.isGP() == (type == Arg::GP))
+                        addEdge(arg, liveTmp);
</ins><span class="cx">                 }
</span><ins>+            });
+    }
</ins><span class="cx"> 
</span><del>-                Arg arg = Arg::stack(stackSlotEntry-&gt;value);
-                Opcode move = type == Arg::GP ? Move : MoveDouble;
</del><ins>+    void addEdge(Tmp a, Tmp b)
+    {
+        ASSERT_WITH_MESSAGE(a.isGP() == b.isGP(), &quot;An interference between registers of different types does not make sense, it can lead to non-colorable graphs.&quot;);
</ins><span class="cx"> 
</span><del>-                if (Arg::isAnyUse(role)) {
-                    Tmp newTmp = code.newTmp(type);
-                    insertionSet.insert(instIndex, move, inst.origin, arg, newTmp);
-                    tmp = newTmp;
</del><ins>+        addEdge(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(a), AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(b));
+    }
</ins><span class="cx"> 
</span><del>-                    // Any new Fill() should never be spilled.
-                    unspillableTmp.add(tmp);
-                }
-                if (Arg::isDef(role))
-                    insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
-            });
</del><ins>+    bool mayBeCoalescable(const Inst&amp; inst) const
+    {
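        // Editor's note: a candidate is a register-to-register move of the type being colored,
        // e.g. a hypothetical &quot;Move %tmp1, %tmp2&quot; for GP or &quot;MoveDouble %ftmp1, %ftmp2&quot; for FP;
        // moves whose arguments are not both Tmps (immediates, addresses) are rejected below.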
+        switch (type) {
+        case Arg::GP:
+            switch (inst.opcode) {
+            case Move:
+                break;
+            default:
+                return false;
+            }
+            break;
+        case Arg::FP:
+            switch (inst.opcode) {
+            case MoveFloat:
+            case MoveDouble:
+                break;
+            default:
+                return false;
+            }
+            break;
</ins><span class="cx">         }
</span><del>-        insertionSet.execute(block);
</del><span class="cx"> 
</span><del>-        if (hasAliasedTmps)
-            block-&gt;insts().removeAllMatching(isUselessMoveInst&lt;type&gt;);
</del><ins>+        ASSERT_WITH_MESSAGE(inst.args.size() == 2, &quot;We assume coalecable moves only have two arguments in a few places.&quot;);
+
+        if (!inst.args[0].isTmp() || !inst.args[1].isTmp())
+            return false;
+
+        ASSERT(inst.args[0].type() == type);
+        ASSERT(inst.args[1].type() == type);
+
+        return true;
</ins><span class="cx">     }
</span><del>-}
</del><span class="cx"> 
</span><del>-template&lt;Arg::Type type&gt;
-void iteratedRegisterCoalescingOnType(
-    Code&amp; code, const UseCounts&lt;Tmp&gt;&amp; useCounts, unsigned&amp; numIterations)
-{
-    HashSet&lt;Tmp&gt; unspillableTmps;
-    while (true) {
-        numIterations++;
-        IteratedRegisterCoalescingAllocator&lt;type&gt; allocator(code, useCounts, unspillableTmps);
-        if (allocator.spilledTmp().isEmpty()) {
-            assignRegisterToTmpInProgram(code, allocator);
-            return;
</del><ins>+    void selectSpill()
+    {
+        if (!m_hasSelectedSpill) {
+            m_hasSelectedSpill = true;
+
+            if (m_hasCoalescedNonTrivialMove)
+                m_coalescedTmpsAtSpill = m_coalescedTmps;
</ins><span class="cx">         }
</span><del>-        addSpillAndFillToProgram&lt;type&gt;(code, allocator, unspillableTmps);
</del><ins>+
+        auto iterator = m_spillWorklist.begin();
+
+        while (iterator != m_spillWorklist.end() &amp;&amp; m_unspillableTmps.contains(*iterator))
+            ++iterator;
+
+        RELEASE_ASSERT_WITH_MESSAGE(iterator != m_spillWorklist.end(), &quot;It is not possible to color the Air graph with the number of available registers.&quot;);
+
+        // Higher score means more desirable to spill. Lower scores maximize the likelihood that a tmp
+        // gets a register.
+        auto score = [&amp;] (Tmp tmp) -&gt; double {
+            // Air exposes the concept of &quot;fast tmps&quot;, and we interpret that to mean that the tmp
+            // should always be in a register.
+            if (m_code.isFastTmp(tmp))
+                return 0;
+            
+            // All else being equal, the score should be directly related to the degree.
+            double degree = static_cast&lt;double&gt;(m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)]);
+
+            // All else being equal, the score should be inversely related to the number of warm uses and
+            // defs.
+            const UseCounts&lt;Tmp&gt;::Counts&amp; counts = m_useCounts[tmp];
+            double uses = counts.numWarmUses + counts.numDefs;
+
+            return degree / uses;
+        };
+
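        // Editor's worked example (hypothetical numbers): a tmp of degree 12 with 3 warm uses
        // and 1 def scores 12 / 4 = 3.0, while a degree-12 tmp with 11 warm uses and 1 def
        // scores 1.0, so the first one is the better spill victim.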
+        auto victimIterator = iterator;
+        double maxScore = score(AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(*iterator));
+
+        ++iterator;
+        for (;iterator != m_spillWorklist.end(); ++iterator) {
+            double tmpScore = score(AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(*iterator));
+            if (tmpScore &gt; maxScore) {
+                if (m_unspillableTmps.contains(*iterator))
+                    continue;
+
+                victimIterator = iterator;
+                maxScore = tmpScore;
+            }
+        }
+
+        unsigned victimIndex = *victimIterator;
+        m_spillWorklist.remove(victimIterator);
+        m_simplifyWorklist.append(victimIndex);
+
+        freezeMoves(victimIndex);
</ins><span class="cx">     }
</span><del>-}
</del><span class="cx"> 
</span><ins>+    void allocate()
+    {
+        ASSERT_WITH_MESSAGE(m_activeMoves.size() &gt;= m_coalescingCandidates.size(), &quot;The activeMove set should be big enough for the quick operations of BitVector.&quot;);
+
+        makeWorkList();
+
+        if (debug) {
+            dataLog(&quot;Interference: &quot;, listDump(m_interferenceEdges), &quot;\n&quot;);
+            dumpInterferenceGraphInDot(WTF::dataFile());
+            dataLog(&quot;Initial work list\n&quot;);
+            dumpWorkLists(WTF::dataFile());
+        }
+
+        do {
+            if (traceDebug) {
+                dataLog(&quot;Before Graph simplification iteration\n&quot;);
+                dumpWorkLists(WTF::dataFile());
+            }
+
+            if (!m_simplifyWorklist.isEmpty())
+                simplify();
+            else if (!m_worklistMoves.isEmpty())
+                coalesce();
+            else if (!m_freezeWorklist.isEmpty())
+                freeze();
+            else if (!m_spillWorklist.isEmpty())
+                selectSpill();
+
+            if (traceDebug) {
+                dataLog(&quot;After Graph simplification iteration\n&quot;);
+                dumpWorkLists(WTF::dataFile());
+            }
+        } while (!m_simplifyWorklist.isEmpty() || !m_worklistMoves.isEmpty() || !m_freezeWorklist.isEmpty() || !m_spillWorklist.isEmpty());
+
+        assignColors();
+    }
+
+#if PLATFORM(COCOA)
+#pragma mark - Debugging helpers.
+#endif
+
+    void dumpInterferenceGraphInDot(PrintStream&amp; out)
+    {
+        out.print(&quot;graph InterferenceGraph { \n&quot;);
+
+        HashSet&lt;Tmp&gt; tmpsWithInterferences;
+        for (const auto&amp; edge : m_interferenceEdges) {
+            tmpsWithInterferences.add(AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(edge.first()));
+            tmpsWithInterferences.add(AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(edge.second()));
+        }
+
+        for (const auto&amp; tmp : tmpsWithInterferences)
+            out.print(&quot;    &quot;, tmp.internalValue(), &quot; [label=\&quot;&quot;, tmp, &quot; (&quot;, m_degrees[AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp)], &quot;)\&quot;];\n&quot;);
+
+        for (const auto&amp; edge : m_interferenceEdges)
+            out.print(&quot;    &quot;, edge.first(), &quot; -- &quot;, edge.second(), &quot;;\n&quot;);
+        out.print(&quot;}\n&quot;);
+    }
+
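    // Editor's sketch of the approximate output shape (hypothetical tmps and degrees):
    //     graph InterferenceGraph { 
    //         1 [label=&quot;%tmp1 (1)&quot;];
    //         2 [label=&quot;%tmp2 (1)&quot;];
    //         1 -- 2;
    //     }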
+    void dumpWorkLists(PrintStream&amp; out)
+    {
+        out.print(&quot;Simplify work list:\n&quot;);
+        for (unsigned tmpIndex : m_simplifyWorklist)
+            out.print(&quot;    &quot;, AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+        out.printf(&quot;Moves work list is empty? %d\n&quot;, m_worklistMoves.isEmpty());
+        out.print(&quot;Freeze work list:\n&quot;);
+        for (unsigned tmpIndex : m_freezeWorklist)
+            out.print(&quot;    &quot;, AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+        out.print(&quot;Spill work list:\n&quot;);
+        for (unsigned tmpIndex : m_spillWorklist)
+            out.print(&quot;    &quot;, AbsoluteTmpMapper&lt;type&gt;::tmpFromAbsoluteIndex(tmpIndex), &quot;\n&quot;);
+    }
+
+    using AbstractColoringAllocator&lt;unsigned&gt;::addEdge;
+    using AbstractColoringAllocator&lt;unsigned&gt;::getAlias;
+
+    Code&amp; m_code;
+    // FIXME: spilling should not be type specific. It is only a side effect of using UseCounts.
+    const UseCounts&lt;Tmp&gt;&amp; m_useCounts;
+    const HashSet&lt;unsigned&gt;&amp; m_unspillableTmps;
+};
+
+class IteratedRegisterCoalescing {
+public:
+    IteratedRegisterCoalescing(Code&amp; code)
+        : m_code(code)
+        , m_useCounts(code)
+    {
+    }
+
+    void run()
+    {
+        iteratedRegisterCoalescingOnType&lt;Arg::GP&gt;();
+        iteratedRegisterCoalescingOnType&lt;Arg::FP&gt;();
+
+        if (reportStats)
+            dataLog(&quot;Num iterations = &quot;, m_numIterations, &quot;\n&quot;);
+    }
+
+private:
+    template&lt;Arg::Type type&gt;
+    void iteratedRegisterCoalescingOnType()
+    {
+        HashSet&lt;unsigned&gt; unspillableTmps;
+        while (true) {
+            ++m_numIterations;
+            ColoringAllocator&lt;type&gt; allocator(m_code, m_useCounts, unspillableTmps);
+            if (!allocator.requiresSpilling()) {
+                assignRegistersToTmp(allocator);
+                return;
+            }
+            addSpillAndFill&lt;type&gt;(allocator, unspillableTmps);
+        }
+    }
+
+    template&lt;Arg::Type type&gt;
+    void assignRegistersToTmp(const ColoringAllocator&lt;type&gt;&amp; allocator)
+    {
+        for (BasicBlock* block : m_code) {
+            // Give Tmp a valid register.
+            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+                inst.forEachTmpFast([&amp;] (Tmp&amp; tmp) {
+                    if (tmp.isReg() || tmp.isGP() == (type != Arg::GP))
+                        return;
+
+                    Tmp aliasTmp = allocator.getAlias(tmp);
+                    Tmp assignedTmp;
+                    if (aliasTmp.isReg())
+                        assignedTmp = Tmp(aliasTmp.reg());
+                    else {
+                        auto reg = allocator.allocatedReg(aliasTmp);
+                        ASSERT(reg);
+                        assignedTmp = Tmp(reg);
+                    }
+                    ASSERT(assignedTmp.isReg());
+                    tmp = assignedTmp;
+                });
+            }
+
+            // Remove all the useless moves we created in this block.
+            block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
+                return allocator.isUselessMove(inst);
+            });
+        }
+    }
+
+    template&lt;Arg::Type type&gt;
+    void addSpillAndFill(const ColoringAllocator&lt;type&gt;&amp; allocator, HashSet&lt;unsigned&gt;&amp; unspillableTmps)
+    {
+        HashMap&lt;Tmp, StackSlot*&gt; stackSlots;
+        for (Tmp tmp : allocator.spilledTmps()) {
+            // All the spilled values become unspillable.
+            unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
+
+            // Allocate stack slot for each spilled value.
+            bool isNewTmp = stackSlots.add(tmp, m_code.addStackSlot(8, StackSlotKind::Anonymous)).isNewEntry;
+            ASSERT_UNUSED(isNewTmp, isNewTmp);
+        }
+
+        // Rewrite the program to get rid of the spilled Tmp.
+        InsertionSet insertionSet(m_code);
+        for (BasicBlock* block : m_code) {
+            bool hasAliasedTmps = false;
+
+            for (unsigned instIndex = 0; instIndex &lt; block-&gt;size(); ++instIndex) {
+                Inst&amp; inst = block-&gt;at(instIndex);
+
+                // Try to replace the register use by memory use when possible.
+                for (unsigned i = 0; i &lt; inst.args.size(); ++i) {
+                    Arg&amp; arg = inst.args[i];
+                    if (arg.isTmp() &amp;&amp; arg.type() == type &amp;&amp; !arg.isReg()) {
+                        auto stackSlotEntry = stackSlots.find(arg.tmp());
+                        if (stackSlotEntry != stackSlots.end() &amp;&amp; inst.admitsStack(i))
+                            arg = Arg::stack(stackSlotEntry-&gt;value);
+                    }
+                }
+
+                // For every other case, add Load/Store as needed.
+                inst.forEachTmp([&amp;] (Tmp&amp; tmp, Arg::Role role, Arg::Type argType) {
+                    if (tmp.isReg() || argType != type)
+                        return;
+
+                    auto stackSlotEntry = stackSlots.find(tmp);
+                    if (stackSlotEntry == stackSlots.end()) {
+                        Tmp alias = allocator.getAliasWhenSpilling(tmp);
+                        if (alias != tmp) {
+                            tmp = alias;
+                            hasAliasedTmps = true;
+                        }
+                        return;
+                    }
+
+                    Arg arg = Arg::stack(stackSlotEntry-&gt;value);
+                    Opcode move = type == Arg::GP ? Move : MoveDouble;
+
+                    if (Arg::isAnyUse(role)) {
+                        Tmp newTmp = m_code.newTmp(type);
+                        insertionSet.insert(instIndex, move, inst.origin, arg, newTmp);
+                        tmp = newTmp;
+
+                        // Any new Fill() should never be spilled.
+                        unspillableTmps.add(AbsoluteTmpMapper&lt;type&gt;::absoluteIndex(tmp));
+                    }
+                    if (Arg::isDef(role))
+                        insertionSet.insert(instIndex + 1, move, inst.origin, tmp, arg);
+                });
+            }
+            insertionSet.execute(block);
+
+            if (hasAliasedTmps) {
+                block-&gt;insts().removeAllMatching([&amp;] (const Inst&amp; inst) {
+                    return allocator.isUselessMove(inst);
+                });
+            }
+        }
+    }
+
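    // ---- Editor's sketch (not part of this changeset) of the spill/fill rewrite above, with
    // hypothetical tmps: if %tmp1 was spilled and the instruction cannot address the stack slot
    // directly,
    //     Add32 %tmp1, %tmp2
    // becomes roughly
    //     Move (stack slot of %tmp1), %newTmp     // fill inserted before the use
    //     Add32 %newTmp, %tmp2
    // A Def() of a spilled tmp gets a matching store inserted after the instruction, and every
    // freshly created fill tmp is marked unspillable so the next coloring iteration cannot spill
    // it again. ----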
+    Code&amp; m_code;
+    UseCounts&lt;Tmp&gt; m_useCounts;
+    unsigned m_numIterations { 0 };
+};
+
</ins><span class="cx"> } // anonymous namespace
</span><span class="cx"> 
</span><span class="cx"> void iteratedRegisterCoalescing(Code&amp; code)
</span><span class="cx"> {
</span><span class="cx">     PhaseScope phaseScope(code, &quot;iteratedRegisterCoalescing&quot;);
</span><span class="cx"> 
</span><del>-    unsigned numIterations = 0;
-
-    UseCounts&lt;Tmp&gt; useCounts(code);
-    iteratedRegisterCoalescingOnType&lt;Arg::GP&gt;(code, useCounts, numIterations);
-    iteratedRegisterCoalescingOnType&lt;Arg::FP&gt;(code, useCounts, numIterations);
-
-    if (reportStats)
-        dataLog(&quot;Num iterations = &quot;, numIterations, &quot;\n&quot;);
</del><ins>+    IteratedRegisterCoalescing iteratedRegisterCoalescing(code);
+    iteratedRegisterCoalescing.run();
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> } } } // namespace JSC::B3::Air
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreb3airAirTmpInlinesh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/b3/air/AirTmpInlines.h (194315 => 194316)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/b3/air/AirTmpInlines.h        2015-12-19 19:01:50 UTC (rev 194315)
+++ trunk/Source/JavaScriptCore/b3/air/AirTmpInlines.h        2015-12-19 21:04:55 UTC (rev 194316)
</span><span class="lines">@@ -57,6 +57,11 @@
</span><span class="cx">         return absoluteIndex(Tmp::gpTmpForIndex(tmpIndex));
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    static unsigned lastMachineRegisterIndex()
+    {
+        return absoluteIndex(Tmp(MacroAssembler::lastRegister()));
+    }
+
</ins><span class="cx">     static Tmp tmpFromAbsoluteIndex(unsigned tmpIndex)
</span><span class="cx">     {
</span><span class="cx">         return Tmp::tmpForInternalValue(tmpIndex);
</span><span class="lines">@@ -77,6 +82,11 @@
</span><span class="cx">         return absoluteIndex(Tmp::fpTmpForIndex(tmpIndex));
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    static unsigned lastMachineRegisterIndex()
+    {
+        return absoluteIndex(Tmp(MacroAssembler::lastFPRegister()));
+    }
+
</ins><span class="cx">     static Tmp tmpFromAbsoluteIndex(unsigned tmpIndex)
</span><span class="cx">     {
</span><span class="cx">         return Tmp::tmpForInternalValue(-tmpIndex);
</span></span></pre>
</div>
</div>

</body>
</html>