<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[281757] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/281757">281757</a></dd>
<dt>Author</dt> <dd>commit-queue@webkit.org</dd>
<dt>Date</dt> <dd>2021-08-30 08:11:39 -0700 (Mon, 30 Aug 2021)</dd>
</dl>

<h3>Log Message</h3>
<pre>RISCV64 support in LLInt
https://bugs.webkit.org/show_bug.cgi?id=229035
<rdar://problem/82120908>

Patch by Zan Dobersek <zdobersek@igalia.com> on 2021-08-30
Reviewed by Yusuke Suzuki.

.:

* Source/cmake/WebKitFeatures.cmake:
Don't force CLoop to be the default for RISCV64 anymore.

Source/JavaScriptCore:

Introduce RISCV64 support at the LLInt level. Along with the necessary
offlineasm backend, plenty of miscellaneous code around the
MacroAssembler infrastructure is also introduced.

Of the existing supported architectures, RISCV64 is most similar to
ARM64, with the same word size and a similar register abundance. This is
mirrored in most changes around the MacroAssembler infrastructure as
well as in the use of the same or similar codepaths in LLInt for the two ISAs.

The MacroAssembler infrastructure won't be used until proper JIT
support is introduced, but the basic facilities are still necessary to
keep things compiling without complicating the configuration matrix.
The MacroAssemblerRISCV64 class provides no-op methods through C++
templates, while RISCV64Assembler is also added in a limited form.

The riscv64 offlineasm backend covers assembly generation for the
instructions that LLInt uses in the current configuration. It doesn't
cover instructions that are only used in e.g. the WebAssembly opcodes,
since WebAssembly won't be enabled until the higher JIT tiers are
supported anyway.

The offlineasm backend's assembly generation for specific instructions
uses pattern matching on operand types, for a better overview of how
the resulting assembly is constructed. Certain improvements are still
possible, e.g. in how scratch registers for more expansive operations
are allocated.

* CMakeLists.txt:
* Sources.txt:
* assembler/AbstractMacroAssembler.h:
* assembler/MacroAssembler.h:
* assembler/MacroAssemblerRISCV64.cpp: Added.
(JSC::MacroAssembler::probe):
* assembler/MacroAssemblerRISCV64.h: Added.
Distorted auto-generated method list removed. Necessary methods are
introduced through no-op templates until actually needed for JIT
generation.
* assembler/MaxFrameExtentForSlowPathCall.h:
* assembler/PerfLog.cpp:
* assembler/ProbeContext.h:
* assembler/RISCV64Assembler.h: Added.
(JSC::RISCV64Assembler::firstRegister):
(JSC::RISCV64Assembler::lastRegister):
(JSC::RISCV64Assembler::numberOfRegisters):
(JSC::RISCV64Assembler::firstSPRegister):
(JSC::RISCV64Assembler::lastSPRegister):
(JSC::RISCV64Assembler::numberOfSPRegisters):
(JSC::RISCV64Assembler::firstFPRegister):
(JSC::RISCV64Assembler::lastFPRegister):
(JSC::RISCV64Assembler::numberOfFPRegisters):
(JSC::RISCV64Assembler::gprName):
(JSC::RISCV64Assembler::sprName):
(JSC::RISCV64Assembler::fprName):
(JSC::RISCV64Assembler::RISCV64Assembler):
(JSC::RISCV64Assembler::buffer):
(JSC::RISCV64Assembler::invert):
(JSC::RISCV64Assembler::getRelocatedAddress):
(JSC::RISCV64Assembler::codeSize const):
(JSC::RISCV64Assembler::getCallReturnOffset):
(JSC::RISCV64Assembler::labelIgnoringWatchpoints):
(JSC::RISCV64Assembler::labelForWatchpoint):
(JSC::RISCV64Assembler::label):
(JSC::RISCV64Assembler::linkJump):
(JSC::RISCV64Assembler::linkCall):
(JSC::RISCV64Assembler::linkPointer):
(JSC::RISCV64Assembler::maxJumpReplacementSize):
(JSC::RISCV64Assembler::patchableJumpSize):
(JSC::RISCV64Assembler::repatchPointer):
(JSC::RISCV64Assembler::relinkJump):
(JSC::RISCV64Assembler::relinkJumpToNop):
(JSC::RISCV64Assembler::relinkCall):
(JSC::RISCV64Assembler::debugOffset):
(JSC::RISCV64Assembler::cacheFlush):
(JSC::RISCV64Assembler::fillNops):
* assembler/RISCV64Registers.h: Added.
* jit/FPRInfo.h:
(JSC::FPRInfo::toRegister):
(JSC::FPRInfo::toArgumentRegister):
(JSC::FPRInfo::toIndex):
(JSC::FPRInfo::debugName):
* jit/GPRInfo.h:
(JSC::GPRInfo::toRegister):
(JSC::GPRInfo::toArgumentRegister):
(JSC::GPRInfo::toIndex):
(JSC::GPRInfo::debugName):
* jit/RegisterSet.cpp:
(JSC::RegisterSet::vmCalleeSaveRegisters):
(JSC::RegisterSet::llintBaselineCalleeSaveRegisters):
* llint/LLIntData.h:
* llint/LLIntOfflineAsmConfig.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter64.asm:
* offlineasm/backends.rb: Reference the riscv64 backend as required.
* offlineasm/registers.rb: List additional possible registers.
* offlineasm/riscv64.rb: Added.

Source/WTF:

* wtf/PlatformEnable.h:
Define ENABLE_LLINT_EMBEDDED_OPCODE_ID to 1 for CPU(RISCV64).</pre>
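<p>The operand-type pattern matching described in the log message is easiest to see in miniature. The following is a minimal, self-contained Ruby sketch of lowering an offlineasm-style add instruction (last operand is the destination) by matching on operand kinds. The class and helper names are illustrative assumptions rather than the actual riscv64.rb code, though the x30 scratch register matches the dataTempRegister choice in MacroAssemblerRISCV64.h below.</p>
<pre>
# Hypothetical sketch of operand-type pattern matching in an offlineasm
# backend; names are illustrative, not the actual riscv64.rb contents.
RISCV64_SCRATCH = "x30" # matches MacroAssemblerRISCV64::dataTempRegister

class RegisterOperand
  attr_reader :name
  def initialize(name)
    @name = name
  end
end

class ImmediateOperand
  attr_reader :value
  def initialize(value)
    @value = value
  end
end

def emit(line)
  puts "    #{line}"
end

# Pick an instruction sequence based on the operand kinds.
def lower_add(operands)
  case operands
  in [RegisterOperand => lhs, RegisterOperand => dest]
    emit "add #{dest.name}, #{dest.name}, #{lhs.name}"
  in [ImmediateOperand => imm, RegisterOperand => dest] if imm.value.between?(-2048, 2047)
    # Small immediates fit addi's 12-bit I-type field directly.
    emit "addi #{dest.name}, #{dest.name}, #{imm.value}"
  in [ImmediateOperand => imm, RegisterOperand => dest]
    # Wider immediates are first materialized into a scratch register;
    # the log message notes this allocation could still be improved.
    emit "li #{RISCV64_SCRATCH}, #{imm.value}"
    emit "add #{dest.name}, #{dest.name}, #{RISCV64_SCRATCH}"
  end
end

lower_add([RegisterOperand.new("x5"), RegisterOperand.new("x10")])
lower_add([ImmediateOperand.new(16), RegisterOperand.new("x10")])
lower_add([ImmediateOperand.new(0x12345), RegisterOperand.new("x10")])
</pre>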

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkChangeLog">trunk/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreCMakeListstxt">trunk/Source/JavaScriptCore/CMakeLists.txt</a></li>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreSourcestxt">trunk/Source/JavaScriptCore/Sources.txt</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh">trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerMacroAssemblerh">trunk/Source/JavaScriptCore/assembler/MacroAssembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerMaxFrameExtentForSlowPathCallh">trunk/Source/JavaScriptCore/assembler/MaxFrameExtentForSlowPathCall.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerPerfLogcpp">trunk/Source/JavaScriptCore/assembler/PerfLog.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerProbeContexth">trunk/Source/JavaScriptCore/assembler/ProbeContext.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitFPRInfoh">trunk/Source/JavaScriptCore/jit/FPRInfo.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitGPRInfoh">trunk/Source/JavaScriptCore/jit/GPRInfo.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitRegisterSetcpp">trunk/Source/JavaScriptCore/jit/RegisterSet.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorellintLLIntDatah">trunk/Source/JavaScriptCore/llint/LLIntData.h</a></li>
<li><a href="#trunkSourceJavaScriptCorellintLLIntOfflineAsmConfigh">trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h</a></li>
<li><a href="#trunkSourceJavaScriptCorellintLowLevelInterpreterasm">trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm</a></li>
<li><a href="#trunkSourceJavaScriptCorellintLowLevelInterpreter64asm">trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm</a></li>
<li><a href="#trunkSourceJavaScriptCoreofflineasmbackendsrb">trunk/Source/JavaScriptCore/offlineasm/backends.rb</a></li>
<li><a href="#trunkSourceJavaScriptCoreofflineasmregistersrb">trunk/Source/JavaScriptCore/offlineasm/registers.rb</a></li>
<li><a href="#trunkSourceWTFChangeLog">trunk/Source/WTF/ChangeLog</a></li>
<li><a href="#trunkSourceWTFwtfPlatformEnableh">trunk/Source/WTF/wtf/PlatformEnable.h</a></li>
<li><a href="#trunkSourcecmakeWebKitFeaturescmake">trunk/Source/cmake/WebKitFeatures.cmake</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreassemblerMacroAssemblerRISCV64cpp">trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerMacroAssemblerRISCV64h">trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerRISCV64Assemblerh">trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerRISCV64Registersh">trunk/Source/JavaScriptCore/assembler/RISCV64Registers.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreofflineasmriscv64rb">trunk/Source/JavaScriptCore/offlineasm/riscv64.rb</a></li>
</ul>
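<p>Among these paths, offlineasm/backends.rb wires the new riscv64 backend into offlineasm's dispatch: roughly, each AST node is lowered by sending it a "lower" plus backend-name message, which riscv64.rb then implements. A rough sketch of that convention, with assumed class and backend-list contents:</p>
<pre>
# Illustrative sketch of offlineasm-style backend dispatch, modeled on
# the "lower" + backend-name convention; names here are assumptions.
BACKENDS = %w[X86_64 ARM64 ARMv7 MIPS RISCV64]

class Node
  def lower(backend)
    send("lower#{backend}") # e.g. riscv64.rb supplies lowerRISCV64
  end

  def lowerRISCV64
    puts "emitting RISC-V assembly for #{self.class}"
  end
end

Node.new.lower("RISCV64")
</pre>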

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/ChangeLog (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/ChangeLog  2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/ChangeLog     2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -1,3 +1,14 @@
</span><ins>+2021-08-30  Zan Dobersek  <zdobersek@igalia.com>
+
+        RISCV64 support in LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=229035
+        <rdar://problem/82120908>
+
+        Reviewed by Yusuke Suzuki.
+
+        * Source/cmake/WebKitFeatures.cmake:
+        Don't force CLoop to be the default for RISCV64 anymore.
+
</ins><span class="cx"> 2021-08-27  Stephan Szabo  <stephan.szabo@sony.com>
</span><span class="cx"> 
</span><span class="cx">         [PlayStation][CMake] Add control over whether JavaScriptCore should be shared
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreCMakeListstxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/CMakeLists.txt       2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt  2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -216,6 +216,7 @@
</span><span class="cx">     offlineasm/parser.rb
</span><span class="cx">     offlineasm/registers.rb
</span><span class="cx">     offlineasm/risc.rb
</span><ins>+    offlineasm/riscv64.rb
</ins><span class="cx">     offlineasm/self_hash.rb
</span><span class="cx">     offlineasm/settings.rb
</span><span class="cx">     offlineasm/transform.rb
</span><span class="lines">@@ -273,6 +274,8 @@
</span><span class="cx">         set(OFFLINE_ASM_BACKEND "ARMv7")
</span><span class="cx">     elseif (WTF_CPU_MIPS)
</span><span class="cx">         set(OFFLINE_ASM_BACKEND "MIPS")
</span><ins>+    elseif (WTF_CPU_RISCV64)
+        set(OFFLINE_ASM_BACKEND "RISCV64")
</ins><span class="cx">     endif ()
</span><span class="cx"> 
</span><span class="cx">     if (NOT ENABLE_JIT)
</span><span class="lines">@@ -577,9 +580,12 @@
</span><span class="cx">     assembler/MacroAssemblerCodeRef.h
</span><span class="cx">     assembler/MacroAssemblerHelpers.h
</span><span class="cx">     assembler/MacroAssemblerMIPS.h
</span><ins>+    assembler/MacroAssemblerRISCV64.h
</ins><span class="cx">     assembler/MacroAssemblerX86Common.h
</span><span class="cx">     assembler/MacroAssemblerX86_64.h
</span><span class="cx">     assembler/Printer.h
</span><ins>+    assembler/RISCV64Assembler.h
+    assembler/RISCV64Registers.h
</ins><span class="cx">     assembler/RegisterInfo.h
</span><span class="cx">     assembler/X86Assembler.h
</span><span class="cx">     assembler/X86Registers.h
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog    2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/ChangeLog       2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -1,3 +1,107 @@
</span><ins>+2021-08-30  Zan Dobersek  <zdobersek@igalia.com>
+
+        RISCV64 support in LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=229035
+        <rdar://problem/82120908>
+
+        Reviewed by Yusuke Suzuki.
+
+        Introduce RISCV64 support at the LLInt level. Along with the necessary
+        offlineasm backend, plenty of miscellaneous code around the
+        MacroAssembler infrastructure is also introduced.
+
+        Of the existing supported architectures, RISCV64 is most similar to
+        ARM64, with the same word size and a similar register abundance. This is
+        mirrored in most changes around the MacroAssembler infrastructure as
+        well as in the use of the same or similar codepaths in LLInt for the two ISAs.
+
+        The MacroAssembler infrastructure won't be used until proper JIT
+        support is introduced, but the basic facilities are still necessary to
+        keep things compiling without complicating the configuration matrix.
+        The MacroAssemblerRISCV64 class provides no-op methods through C++
+        templates, while RISCV64Assembler is also added in a limited form.
+
+        The riscv64 offlineasm backend covers assembly generation for the
+        instructions that LLInt uses in the current configuration. It doesn't
+        cover instructions that are only used in e.g. the WebAssembly opcodes,
+        since WebAssembly won't be enabled until the higher JIT tiers are
+        supported anyway.
+
+        The offlineasm backend's assembly generation for specific instructions
+        uses pattern matching on operand types, for a better overview of how
+        the resulting assembly is constructed. Certain improvements are still
+        possible, e.g. in how scratch registers for more expansive operations
+        are allocated.
+
+        * CMakeLists.txt:
+        * Sources.txt:
+        * assembler/AbstractMacroAssembler.h:
+        * assembler/MacroAssembler.h:
+        * assembler/MacroAssemblerRISCV64.cpp: Added.
+        (JSC::MacroAssembler::probe):
+        * assembler/MacroAssemblerRISCV64.h: Added.
+        Distorted auto-generated method list removed. Necessary methods are
+        introduced through no-op templates until actually needed for JIT
+        generation.
+        * assembler/MaxFrameExtentForSlowPathCall.h:
+        * assembler/PerfLog.cpp:
+        * assembler/ProbeContext.h:
+        * assembler/RISCV64Assembler.h: Added.
+        (JSC::RISCV64Assembler::firstRegister):
+        (JSC::RISCV64Assembler::lastRegister):
+        (JSC::RISCV64Assembler::numberOfRegisters):
+        (JSC::RISCV64Assembler::firstSPRegister):
+        (JSC::RISCV64Assembler::lastSPRegister):
+        (JSC::RISCV64Assembler::numberOfSPRegisters):
+        (JSC::RISCV64Assembler::firstFPRegister):
+        (JSC::RISCV64Assembler::lastFPRegister):
+        (JSC::RISCV64Assembler::numberOfFPRegisters):
+        (JSC::RISCV64Assembler::gprName):
+        (JSC::RISCV64Assembler::sprName):
+        (JSC::RISCV64Assembler::fprName):
+        (JSC::RISCV64Assembler::RISCV64Assembler):
+        (JSC::RISCV64Assembler::buffer):
+        (JSC::RISCV64Assembler::invert):
+        (JSC::RISCV64Assembler::getRelocatedAddress):
+        (JSC::RISCV64Assembler::codeSize const):
+        (JSC::RISCV64Assembler::getCallReturnOffset):
+        (JSC::RISCV64Assembler::labelIgnoringWatchpoints):
+        (JSC::RISCV64Assembler::labelForWatchpoint):
+        (JSC::RISCV64Assembler::label):
+        (JSC::RISCV64Assembler::linkJump):
+        (JSC::RISCV64Assembler::linkCall):
+        (JSC::RISCV64Assembler::linkPointer):
+        (JSC::RISCV64Assembler::maxJumpReplacementSize):
+        (JSC::RISCV64Assembler::patchableJumpSize):
+        (JSC::RISCV64Assembler::repatchPointer):
+        (JSC::RISCV64Assembler::relinkJump):
+        (JSC::RISCV64Assembler::relinkJumpToNop):
+        (JSC::RISCV64Assembler::relinkCall):
+        (JSC::RISCV64Assembler::debugOffset):
+        (JSC::RISCV64Assembler::cacheFlush):
+        (JSC::RISCV64Assembler::fillNops):
+        * assembler/RISCV64Registers.h: Added.
+        * jit/FPRInfo.h:
+        (JSC::FPRInfo::toRegister):
+        (JSC::FPRInfo::toArgumentRegister):
+        (JSC::FPRInfo::toIndex):
+        (JSC::FPRInfo::debugName):
+        * jit/GPRInfo.h:
+        (JSC::GPRInfo::toRegister):
+        (JSC::GPRInfo::toArgumentRegister):
+        (JSC::GPRInfo::toIndex):
+        (JSC::GPRInfo::debugName):
+        * jit/RegisterSet.cpp:
+        (JSC::RegisterSet::vmCalleeSaveRegisters):
+        (JSC::RegisterSet::llintBaselineCalleeSaveRegisters):
+        * llint/LLIntData.h:
+        * llint/LLIntOfflineAsmConfig.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * offlineasm/backends.rb: Reference the riscv64 backend as required.
+        * offlineasm/registers.rb: List additional possible registers.
+        * offlineasm/riscv64.rb: Added.
+
</ins><span class="cx"> 2021-08-29  Keith Miller  <keith_miller@apple.com>
</span><span class="cx"> 
</span><span class="cx">         Add openFile function to jsc.cpp that links to file backed memory
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreSourcestxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/Sources.txt (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/Sources.txt  2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/Sources.txt     2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -55,6 +55,7 @@
</span><span class="cx"> assembler/MacroAssemblerCodeRef.cpp
</span><span class="cx"> assembler/MacroAssemblerMIPS.cpp
</span><span class="cx"> assembler/MacroAssemblerPrinter.cpp
</span><ins>+assembler/MacroAssemblerRISCV64.cpp
</ins><span class="cx"> assembler/MacroAssemblerX86Common.cpp
</span><span class="cx"> assembler/PerfLog.cpp
</span><span class="cx"> assembler/Printer.cpp
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h   2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h      2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -397,7 +397,7 @@
</span><span class="cx">         {
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-#if CPU(X86_64) || CPU(ARM64)
</del><ins>+#if CPU(X86_64) || CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">         explicit TrustedImm64(TrustedImmPtr ptr)
</span><span class="cx">             : m_value(ptr.asIntptr())
</span><span class="cx">         {
</span><span class="lines">@@ -413,7 +413,7 @@
</span><span class="cx">             : TrustedImm64(value)
</span><span class="cx">         {
</span><span class="cx">         }
</span><del>-#if CPU(X86_64) || CPU(ARM64)
</del><ins>+#if CPU(X86_64) || CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">         explicit Imm64(TrustedImmPtr ptr)
</span><span class="cx">             : TrustedImm64(ptr)
</span><span class="cx">         {
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerMacroAssemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/MacroAssembler.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MacroAssembler.h   2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssembler.h      2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -55,6 +55,11 @@
</span><span class="cx"> #define TARGET_MACROASSEMBLER MacroAssemblerX86_64
</span><span class="cx"> #include "MacroAssemblerX86_64.h"
</span><span class="cx"> 
</span><ins>+#elif CPU(RISCV64)
+#define TARGET_ASSEMBLER RISCV64Assembler
+#define TARGET_MACROASSEMBLER MacroAssemblerRISCV64
+#include "MacroAssemblerRISCV64.h"
+
</ins><span class="cx"> #else
</span><span class="cx"> #error "The MacroAssembler is not supported on this platform."
</span><span class="cx"> #endif
</span><span class="lines">@@ -132,7 +137,7 @@
</span><span class="cx">     using MacroAssemblerBase::and32;
</span><span class="cx">     using MacroAssemblerBase::branchAdd32;
</span><span class="cx">     using MacroAssemblerBase::branchMul32;
</span><del>-#if CPU(ARM64) || CPU(ARM_THUMB2) || CPU(X86_64) || CPU(MIPS)
</del><ins>+#if CPU(ARM64) || CPU(ARM_THUMB2) || CPU(X86_64) || CPU(MIPS) || CPU(RISCV64)
</ins><span class="cx">     using MacroAssemblerBase::branchPtr;
</span><span class="cx"> #endif
</span><span class="cx">     using MacroAssemblerBase::branchSub32;
</span><span class="lines">@@ -144,7 +149,7 @@
</span><span class="cx">     using MacroAssemblerBase::urshift32;
</span><span class="cx">     using MacroAssemblerBase::xor32;
</span><span class="cx"> 
</span><del>-#if CPU(ARM64) || CPU(X86_64)
</del><ins>+#if CPU(ARM64) || CPU(X86_64) || CPU(RISCV64)
</ins><span class="cx">     using MacroAssemblerBase::and64;
</span><span class="cx">     using MacroAssemblerBase::convertInt32ToDouble;
</span><span class="cx">     using MacroAssemblerBase::store64;
</span><span class="lines">@@ -335,7 +340,7 @@
</span><span class="cx">     static constexpr ptrdiff_t pushToSaveByteOffset() { return sizeof(void*); }
</span><span class="cx"> #endif // !CPU(ARM64)
</span><span class="cx"> 
</span><del>-#if CPU(X86_64) || CPU(ARM64)
</del><ins>+#if CPU(X86_64) || CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">     void peek64(RegisterID dest, int index = 0)
</span><span class="cx">     {
</span><span class="cx">         load64(Address(stackPointerRegister, (index * sizeof(void*))), dest);
</span><span class="lines">@@ -1706,7 +1711,7 @@
</span><span class="cx">         storePtr(value, addressForPoke(index));
</span><span class="cx">     }
</span><span class="cx">     
</span><del>-#if CPU(X86_64) || CPU(ARM64)
</del><ins>+#if CPU(X86_64) || CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">     void poke(Imm64 value, int index = 0)
</span><span class="cx">     {
</span><span class="cx">         store64(value, addressForPoke(index));
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerMacroAssemblerRISCV64cpp"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.cpp (0 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.cpp                          (rev 0)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.cpp     2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -0,0 +1,42 @@
</span><ins>+/*
+ * Copyright (C) 2021 Igalia S.L.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "MacroAssembler.h"
+
+#if ENABLE(ASSEMBLER) && CPU(RISCV64)
+
+#include "ProbeContext.h"
+
+namespace JSC {
+
+void MacroAssembler::probe(Probe::Function, void*)
+{
+    // TODO
+}
+
+} // namespace JSC
+
+#endif // ENABLE(ASSEMBLER) && CPU(RISCV64)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerMacroAssemblerRISCV64h"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h (0 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h                            (rev 0)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerRISCV64.h       2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -0,0 +1,398 @@
</span><ins>+/*
+ * Copyright (C) 2021 Igalia S.L.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(ASSEMBLER) && CPU(RISCV64)
+
+#include "AbstractMacroAssembler.h"
+#include "RISCV64Assembler.h"
+
+#define MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(methodName) \
+    template<typename... Args> void methodName(Args&&...) { }
+#define MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(methodName, returnType) \
+    template<typename... Args> returnType methodName(Args&&...) { return { }; }
+
+namespace JSC {
+
+using Assembler = TARGET_ASSEMBLER;
+
+class MacroAssemblerRISCV64 : public AbstractMacroAssembler<Assembler> {
+public:
+    static constexpr unsigned numGPRs = 32;
+    static constexpr unsigned numFPRs = 32;
+
+    static constexpr RegisterID dataTempRegister = RISCV64Registers::x30;
+    static constexpr RegisterID memoryTempRegister = RISCV64Registers::x31;
+
+    static constexpr RegisterID InvalidGPRReg = RISCV64Registers::InvalidGPRReg;
+
+    RegisterID scratchRegister()
+    {
+        return dataTempRegister;
+    }
+
+    static bool supportsFloatingPoint() { return true; }
+    static bool supportsFloatingPointTruncate() { return true; }
+    static bool supportsFloatingPointSqrt() { return true; }
+    static bool supportsFloatingPointAbs() { return true; }
+    static bool supportsFloatingPointRounding() { return true; }
+
+    enum RelationalCondition {
+        Equal,
+        NotEqual,
+        Above,
+        AboveOrEqual,
+        Below,
+        BelowOrEqual,
+        GreaterThan,
+        GreaterThanOrEqual,
+        LessThan,
+        LessThanOrEqual,
+    };
+
+    enum ResultCondition {
+        Overflow,
+        Signed,
+        PositiveOrZero,
+        Zero,
+        NonZero,
+    };
+
+    enum ZeroCondition {
+        IsZero,
+        IsNonZero,
+    };
+
+    enum DoubleCondition {
+        DoubleEqualAndOrdered,
+        DoubleNotEqualAndOrdered,
+        DoubleGreaterThanAndOrdered,
+        DoubleGreaterThanOrEqualAndOrdered,
+        DoubleLessThanAndOrdered,
+        DoubleLessThanOrEqualAndOrdered,
+        DoubleEqualOrUnordered,
+        DoubleNotEqualOrUnordered,
+        DoubleGreaterThanOrUnordered,
+        DoubleGreaterThanOrEqualOrUnordered,
+        DoubleLessThanOrUnordered,
+        DoubleLessThanOrEqualOrUnordered,
+    };
+
+    static constexpr RegisterID stackPointerRegister = RISCV64Registers::sp;
+    static constexpr RegisterID framePointerRegister = RISCV64Registers::fp;
+    static constexpr RegisterID linkRegister = RISCV64Registers::ra;
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(add32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(add64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(sub32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(sub64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(mul32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(mul64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(and32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(and64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(countLeadingZeros32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(countLeadingZeros64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(countTrailingZeros32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(countTrailingZeros64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(byteSwap16);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(byteSwap32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(byteSwap64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(lshift32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(lshift64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(rshift32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(rshift64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(urshift32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(urshift64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(rotateRight32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(rotateRight64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load8);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load8SignedExtendTo32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load16);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load16Unaligned);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load16SignedExtendTo32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load32WithUnalignedHalfWords);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(load64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(store8);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(store16);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(store32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(store64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(zeroExtend8To32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(zeroExtend16To32);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(signExtend8To32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(signExtend16To32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(signExtend32ToPtr);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(zeroExtend32ToWord);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(load64WithAddressOffsetPatch, DataLabel32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(load64WithCompactAddressOffsetPatch, DataLabelCompact);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(store64WithAddressOffsetPatch, DataLabel32);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(or8);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(or16);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(or32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(or64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(xor32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(xor64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(not32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(not64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(neg32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(neg64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(move);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(swap);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(moveZeroToDouble);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(moveFloatTo32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(move32ToFloat);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(moveDouble);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(moveDoubleTo64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(move64ToDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(compare8);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(compare32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(compare64);
+
+    static RelationalCondition invert(RelationalCondition cond)
+    {
+        return static_cast<RelationalCondition>(Assembler::invert(static_cast<Assembler::Condition>(cond)));
+    }
+
+    template<PtrTag resultTag, PtrTag locationTag>
+    static FunctionPtr<resultTag> readCallTarget(CodeLocationCall<locationTag>) { return { }; }
+
+    template<PtrTag tag>
+    static void replaceWithVMHalt(CodeLocationLabel<tag>) { }
+
+    template<PtrTag startTag, PtrTag destTag>
+    static void replaceWithJump(CodeLocationLabel<startTag>, CodeLocationLabel<destTag>) { }
+
+    static ptrdiff_t maxJumpReplacementSize()
+    {
+        return Assembler::maxJumpReplacementSize();
+    }
+
+    static ptrdiff_t patchableJumpSize()
+    {
+        return Assembler::patchableJumpSize();
+    }
+
+    template<PtrTag tag>
+    static CodeLocationLabel<tag> startOfBranchPtrWithPatchOnRegister(CodeLocationDataLabelPtr<tag>) { return { }; }
+
+    template<PtrTag tag>
+    static CodeLocationLabel<tag> startOfPatchableBranchPtrWithPatchOnAddress(CodeLocationDataLabelPtr<tag>) { return { }; }
+
+    template<PtrTag tag>
+    static CodeLocationLabel<tag> startOfPatchableBranch32WithPatchOnAddress(CodeLocationDataLabel32<tag>) { return { }; }
+
+    template<PtrTag tag>
+    static void revertJumpReplacementToBranchPtrWithPatch(CodeLocationLabel<tag>, RegisterID, void*) { }
+
+    template<PtrTag tag>
+    static void revertJumpReplacementToPatchableBranchPtrWithPatch(CodeLocationLabel<tag>, Address, void*) { }
+
+    template<PtrTag tag>
+    static void revertJumpReplacementToPatchableBranch32WithPatch(CodeLocationLabel<tag>, Address, int32_t) { }
+
+    template<PtrTag callTag, PtrTag destTag>
+    static void repatchCall(CodeLocationCall<callTag>, CodeLocationLabel<destTag>) { }
+
+    template<PtrTag callTag, PtrTag destTag>
+    static void repatchCall(CodeLocationCall<callTag>, FunctionPtr<destTag>) { }
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(jump, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(farJump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(nearCall, Call);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(nearTailCall, Call);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(threadSafePatchableNearCall, Call);
+
+    void ret() { }
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(test8);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(test32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(test64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branch8, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branch32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branch64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branch32WithPatch, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branch32WithUnalignedHalfWords, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAdd32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAdd64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchSub32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchSub64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchMul32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchMul64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchNeg32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchNeg64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchTest8, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchTest32, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchTest64, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchPtr, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(moveWithPatch, DataLabel32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(moveWithPatch, DataLabelPtr);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchPtrWithPatch, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(patchableBranch8, PatchableJump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(patchableBranch32, PatchableJump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(patchableBranch64, PatchableJump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchFloat, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchDouble, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchDoubleNonZero, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchDoubleZeroOrNaN, Jump);
+
+    enum BranchTruncateType { BranchIfTruncateFailed, BranchIfTruncateSuccessful };
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchTruncateDoubleToInt32, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(branchConvertDoubleToInt32);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(call, Call);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(callOperation);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(getEffectiveAddress);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(loadFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(loadDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(storeFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(storeDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(addFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(addDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(subFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(subDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(mulFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(mulDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(divFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(divDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(sqrtFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(sqrtDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(absFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(absDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(ceilFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(ceilDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(floorFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(floorDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(andFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(andDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(orFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(orDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(negateFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(negateDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(roundTowardNearestIntFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(roundTowardNearestIntDouble);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(roundTowardZeroFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(roundTowardZeroDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(compareFloat, Jump);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(compareDouble, Jump);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertInt32ToFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertInt32ToDouble);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertInt64ToFloat);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertInt64ToDouble);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertFloatToDouble);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(convertDoubleToFloat);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateFloatToInt32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateFloatToUint32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateFloatToInt64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateFloatToUint64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateDoubleToInt32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateDoubleToUint32);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateDoubleToInt64);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(truncateDoubleToUint64);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(push);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(pushPair);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(pushToSave);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(pop);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(popPair);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(popToRestore);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD(abortWithReason);
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(storePtrWithPatch, DataLabelPtr);
+
+    void breakpoint(uint16_t = 0xc471) { }
+    void nop() { }
+
+    void memoryFence() { }
+    void storeFence() { }
+    void loadFence() { }
+
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAtomicWeakCAS8, JumpList);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAtomicWeakCAS16, JumpList);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAtomicWeakCAS32, JumpList);
+    MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN(branchAtomicWeakCAS64, JumpList);
+
+    template<PtrTag tag>
+    static void linkCall(void*, Call, FunctionPtr<tag>) { }
+};
+
+} // namespace JSC
+
+#undef MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD
+#undef MACRO_ASSEMBLER_RISCV64_TEMPLATED_NOOP_METHOD_WITH_RETURN
+
+#endif // ENABLE(ASSEMBLER) && CPU(RISCV64)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerMaxFrameExtentForSlowPathCallh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/MaxFrameExtentForSlowPathCall.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MaxFrameExtentForSlowPathCall.h    2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/assembler/MaxFrameExtentForSlowPathCall.h       2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -50,7 +50,7 @@
</span><span class="cx"> // 7 args on stack (28 bytes).
</span><span class="cx"> static constexpr size_t maxFrameExtentForSlowPathCall = 40;
</span><span class="cx"> 
</span><del>-#elif CPU(ARM64) || CPU(ARM64E)
</del><ins>+#elif CPU(ARM64) || CPU(ARM64E) || CPU(RISCV64)
</ins><span class="cx"> // All args in registers.
</span><span class="cx"> static constexpr size_t maxFrameExtentForSlowPathCall = 0;
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerPerfLogcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/PerfLog.cpp (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/PerfLog.cpp        2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/assembler/PerfLog.cpp   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -77,6 +77,8 @@
</span><span class="cx"> #else
</span><span class="cx"> static constexpr uint32_t elfMachine = EM_MIPS;
</span><span class="cx"> #endif
</span><ins>+#elif CPU(RISCV64)
+static constexpr uint32_t elfMachine = EM_RISCV;
</ins><span class="cx"> #endif
</span><span class="cx"> 
</span><span class="cx"> } // namespace Constants
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerProbeContexth"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/ProbeContext.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/ProbeContext.h     2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/assembler/ProbeContext.h        2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -116,6 +116,8 @@
</span><span class="cx">     return *reinterpret_cast<void**>(&gpr(ARMRegisters::pc));
</span><span class="cx"> #elif CPU(MIPS)
</span><span class="cx">     return *reinterpret_cast<void**>(&spr(MIPSRegisters::pc));
</span><ins>+#elif CPU(RISCV64)
+    return *reinterpret_cast<void**>(&spr(RISCV64Registers::pc));
</ins><span class="cx"> #else
</span><span class="cx"> #error "Unsupported CPU"
</span><span class="cx"> #endif
</span><span class="lines">@@ -131,6 +133,8 @@
</span><span class="cx">     return *reinterpret_cast<void**>(&gpr(ARMRegisters::fp));
</span><span class="cx"> #elif CPU(MIPS)
</span><span class="cx">     return *reinterpret_cast<void**>(&gpr(MIPSRegisters::fp));
</span><ins>+#elif CPU(RISCV64)
+    return *reinterpret_cast<void**>(&gpr(RISCV64Registers::fp));
</ins><span class="cx"> #else
</span><span class="cx"> #error "Unsupported CPU"
</span><span class="cx"> #endif
</span><span class="lines">@@ -146,6 +150,8 @@
</span><span class="cx">     return *reinterpret_cast<void**>(&gpr(ARMRegisters::sp));
</span><span class="cx"> #elif CPU(MIPS)
</span><span class="cx">     return *reinterpret_cast<void**>(&gpr(MIPSRegisters::sp));
</span><ins>+#elif CPU(RISCV64)
+    return *reinterpret_cast<void**>(&gpr(RISCV64Registers::sp));
</ins><span class="cx"> #else
</span><span class="cx"> #error "Unsupported CPU"
</span><span class="cx"> #endif
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerRISCV64Assemblerh"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h (0 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h                         (rev 0)
+++ trunk/Source/JavaScriptCore/assembler/RISCV64Assembler.h    2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -0,0 +1,197 @@
</span><ins>+/*
+ * Copyright (C) 2021 Igalia S.L.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(ASSEMBLER) && CPU(RISCV64)
+
+#include "AssemblerBuffer.h"
+#include "RISCV64Registers.h"
+
+namespace JSC {
+
+namespace RISCV64Registers {
+
+typedef enum : int8_t {
+#define REGISTER_ID(id, name, r, cs) id,
+    FOR_EACH_GP_REGISTER(REGISTER_ID)
+#undef REGISTER_ID
+
+#define REGISTER_ALIAS(id, name, alias) id = alias,
+    FOR_EACH_REGISTER_ALIAS(REGISTER_ALIAS)
+#undef REGISTER_ALIAS
+
+    InvalidGPRReg = -1,
+} RegisterID;
+
+typedef enum : int8_t {
+#define REGISTER_ID(id, name) id,
+    FOR_EACH_SP_REGISTER(REGISTER_ID)
+#undef REGISTER_ID
+
+    InvalidSPReg = -1,
+} SPRegisterID;
+
+typedef enum : int8_t {
+#define REGISTER_ID(id, name, r, cs) id,
+    FOR_EACH_FP_REGISTER(REGISTER_ID)
+#undef REGISTER_ID
+
+    InvalidFPRReg = -1,
+} FPRegisterID;
+
+} // namespace RISCV64Registers
+
+class RISCV64Assembler {
+public:
+    using RegisterID = RISCV64Registers::RegisterID;
+    using SPRegisterID = RISCV64Registers::SPRegisterID;
+    using FPRegisterID = RISCV64Registers::FPRegisterID;
+
+    static constexpr RegisterID firstRegister() { return RISCV64Registers::x0; }
+    static constexpr RegisterID lastRegister() { return RISCV64Registers::x31; }
+    static constexpr unsigned numberOfRegisters() { return lastRegister() - firstRegister() + 1; }
+
+    static constexpr SPRegisterID firstSPRegister() { return RISCV64Registers::pc; }
+    static constexpr SPRegisterID lastSPRegister() { return RISCV64Registers::pc; }
+    static constexpr unsigned numberOfSPRegisters() { return lastSPRegister() - firstSPRegister() + 1; }
+
+    static constexpr FPRegisterID firstFPRegister() { return RISCV64Registers::f0; }
+    static constexpr FPRegisterID lastFPRegister() { return RISCV64Registers::f31; }
+    static constexpr unsigned numberOfFPRegisters() { return lastFPRegister() - firstFPRegister() + 1; }
+
+    static const char* gprName(RegisterID id)
+    {
+        ASSERT(id >= firstRegister() && id <= lastRegister());
+        static const char* const nameForRegister[numberOfRegisters()] = {
+#define REGISTER_NAME(id, name, r, cs) name,
+            FOR_EACH_GP_REGISTER(REGISTER_NAME)
+#undef REGISTER_NAME
+        };
+        return nameForRegister[id];
+    }
+
+    static const char* sprName(SPRegisterID id)
+    {
+        ASSERT(id >= firstSPRegister() && id <= lastSPRegister());
+        static const char* const nameForRegister[numberOfSPRegisters()] = {
+#define REGISTER_NAME(id, name) name,
+            FOR_EACH_SP_REGISTER(REGISTER_NAME)
+#undef REGISTER_NAME
+        };
+        return nameForRegister[id];
+    }
+
+    static const char* fprName(FPRegisterID id)
+    {
+        ASSERT(id >= firstFPRegister() && id <= lastFPRegister());
+        static const char* const nameForRegister[numberOfFPRegisters()] = {
+#define REGISTER_NAME(id, name, r, cs) name,
+            FOR_EACH_FP_REGISTER(REGISTER_NAME)
+#undef REGISTER_NAME
+        };
+        return nameForRegister[id];
+    }
+
+    RISCV64Assembler() { }
+
+    AssemblerBuffer& buffer() { return m_buffer; }
+
+    typedef enum {
+        ConditionEQ,
+        ConditionNE,
+        ConditionHS, ConditionCS = ConditionHS,
+        ConditionLO, ConditionCC = ConditionLO,
+        ConditionMI,
+        ConditionPL,
+        ConditionVS,
+        ConditionVC,
+        ConditionHI,
+        ConditionLS,
+        ConditionGE,
+        ConditionLT,
+        ConditionGT,
+        ConditionLE,
+        ConditionAL,
+        ConditionInvalid
+    } Condition;
+
+    static Condition invert(Condition cond)
+    {
+        return static_cast<Condition>(cond ^ 1);
+    }
+
+    static void* getRelocatedAddress(void* code, AssemblerLabel label)
+    {
+        ASSERT(label.isSet());
+        return reinterpret_cast<void*>(reinterpret_cast<ptrdiff_t>(code) + label.offset());
+    }
+
+    size_t codeSize() const { return m_buffer.codeSize(); }
+
+    static unsigned getCallReturnOffset(AssemblerLabel) { return 0; }
+
+    AssemblerLabel labelIgnoringWatchpoints() { return { }; }
+    AssemblerLabel labelForWatchpoint() { return { }; }
+    AssemblerLabel label() { return { }; }
+
+    static void linkJump(void*, AssemblerLabel, void*) { }
+
+    static void linkJump(AssemblerLabel, AssemblerLabel) { }
+
+    static void linkCall(void*, AssemblerLabel, void*) { }
+
+    static void linkPointer(void*, AssemblerLabel, void*) { }
+
+    static ptrdiff_t maxJumpReplacementSize()
+    {
+        return 4;
+    }
+
+    static constexpr ptrdiff_t patchableJumpSize()
+    {
+        return 4;
+    }
+
+    static void repatchPointer(void*, void*) { }
+
+    static void relinkJump(void*, void*) { }
+    static void relinkJumpToNop(void*) { }
+    static void relinkCall(void*, void*) { }
+
+    unsigned debugOffset() { return m_buffer.debugOffset(); }
+
+    static void cacheFlush(void*, size_t) { }
+
+    using CopyFunction = void*(&)(void*, const void*, size_t);
+    template <CopyFunction copy>
+    static void fillNops(void*, size_t) { }
+
+    AssemblerBuffer m_buffer;
+};
+
+} // namespace JSC
+
+#endif // ENABLE(ASSEMBLER) && CPU(RISCV64)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerRISCV64Registersh"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/assembler/RISCV64Registers.h (0 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/RISCV64Registers.h                         (rev 0)
+++ trunk/Source/JavaScriptCore/assembler/RISCV64Registers.h    2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -0,0 +1,112 @@
</span><ins>+/*
+ * Copyright (C) 2021 Igalia S.L.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+// More on the RISC-V calling convention and registers:
+// https://riscv.org/wp-content/uploads/2015/01/riscv-calling.pdf
+// https://riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf (Chapter 20)
+
+#define RegisterNames RISCV64Registers
+
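+// Each GP/FP register entry below carries two trailing flags; as in the other
+// ports' register headers, they are read as (id, name, isReserved, isCalleeSaved).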
+#define FOR_EACH_GP_REGISTER(macro)    \
+    macro(x0, "x0", 1, 0)              \
+    macro(x1, "x1", 1, 0)              \
+    macro(x2, "x2", 1, 1)              \
+    macro(x3, "x3", 1, 0)              \
+    macro(x4, "x4", 1, 0)              \
+    macro(x5, "x5", 0, 0)              \
+    macro(x6, "x6", 0, 0)              \
+    macro(x7, "x7", 0, 0)              \
+    macro(x8, "x8", 1, 1)              \
+    macro(x9, "x9", 0, 1)              \
+    macro(x10, "x10", 0, 0)            \
+    macro(x11, "x11", 0, 0)            \
+    macro(x12, "x12", 0, 0)            \
+    macro(x13, "x13", 0, 0)            \
+    macro(x14, "x14", 0, 0)            \
+    macro(x15, "x15", 0, 0)            \
+    macro(x16, "x16", 0, 0)            \
+    macro(x17, "x17", 0, 0)            \
+    macro(x18, "x18", 0, 1)            \
+    macro(x19, "x19", 0, 1)            \
+    macro(x20, "x20", 0, 1)            \
+    macro(x21, "x21", 0, 1)            \
+    macro(x22, "x22", 0, 1)            \
+    macro(x23, "x23", 0, 1)            \
+    macro(x24, "x24", 0, 1)            \
+    macro(x25, "x25", 0, 1)            \
+    macro(x26, "x26", 0, 1)            \
+    macro(x27, "x27", 0, 1)            \
+    macro(x28, "x28", 0, 0)            \
+    macro(x29, "x29", 0, 0)            \
+/* MacroAssembler scratch registers */ \
+    macro(x30, "x30", 0, 0)            \
+    macro(x31, "x31", 0, 0)
+
+#define FOR_EACH_REGISTER_ALIAS(macro) \
+    macro(zero, "zero", x0)            \
+    macro(ra, "ra", x1)                \
+    macro(sp, "sp", x2)                \
+    macro(gp, "gp", x3)                \
+    macro(tp, "tp", x4)                \
+    macro(fp, "fp", x8)
+
+#define FOR_EACH_SP_REGISTER(macro) \
+    macro(pc, "pc")
+
+#define FOR_EACH_FP_REGISTER(macro) \
+    macro(f0, "f0", 0, 0)           \
+    macro(f1, "f1", 0, 0)           \
+    macro(f2, "f2", 0, 0)           \
+    macro(f3, "f3", 0, 0)           \
+    macro(f4, "f4", 0, 0)           \
+    macro(f5, "f5", 0, 0)           \
+    macro(f6, "f6", 0, 0)           \
+    macro(f7, "f7", 0, 0)           \
+    macro(f8, "f8", 0, 1)           \
+    macro(f9, "f9", 0, 1)           \
+    macro(f10, "f10", 0, 0)         \
+    macro(f11, "f11", 0, 0)         \
+    macro(f12, "f12", 0, 0)         \
+    macro(f13, "f13", 0, 0)         \
+    macro(f14, "f14", 0, 0)         \
+    macro(f15, "f15", 0, 0)         \
+    macro(f16, "f16", 0, 0)         \
+    macro(f17, "f17", 0, 0)         \
+    macro(f18, "f18", 0, 1)         \
+    macro(f19, "f19", 0, 1)         \
+    macro(f20, "f20", 0, 1)         \
+    macro(f21, "f21", 0, 1)         \
+    macro(f22, "f22", 0, 1)         \
+    macro(f23, "f23", 0, 1)         \
+    macro(f24, "f24", 0, 1)         \
+    macro(f25, "f25", 0, 1)         \
+    macro(f26, "f26", 0, 1)         \
+    macro(f27, "f27", 0, 1)         \
+    macro(f28, "f28", 0, 0)         \
+    macro(f29, "f29", 0, 0)         \
+    macro(f30, "f30", 0, 0)         \
+    macro(f31, "f31", 0, 0)
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCorejitFPRInfoh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/FPRInfo.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/FPRInfo.h        2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/jit/FPRInfo.h   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -329,6 +329,104 @@
</span><span class="cx"> 
</span><span class="cx"> #endif // CPU(MIPS)
</span><span class="cx"> 
</span><ins>+#if CPU(RISCV64)
+
+class FPRInfo {
+public:
+    typedef FPRReg RegisterType;
+    static constexpr unsigned numberOfRegisters = 20;
+    static constexpr unsigned numberOfArgumentRegisters = 8;
+
+    static constexpr FPRReg fpRegT0 = RISCV64Registers::f0;
+    static constexpr FPRReg fpRegT1 = RISCV64Registers::f1;
+    static constexpr FPRReg fpRegT2 = RISCV64Registers::f2;
+    static constexpr FPRReg fpRegT3 = RISCV64Registers::f3;
+    static constexpr FPRReg fpRegT4 = RISCV64Registers::f4;
+    static constexpr FPRReg fpRegT5 = RISCV64Registers::f5;
+    static constexpr FPRReg fpRegT6 = RISCV64Registers::f6;
+    static constexpr FPRReg fpRegT7 = RISCV64Registers::f7;
+    static constexpr FPRReg fpRegT8 = RISCV64Registers::f10;
+    static constexpr FPRReg fpRegT9 = RISCV64Registers::f11;
+    static constexpr FPRReg fpRegT10 = RISCV64Registers::f12;
+    static constexpr FPRReg fpRegT11 = RISCV64Registers::f13;
+    static constexpr FPRReg fpRegT12 = RISCV64Registers::f14;
+    static constexpr FPRReg fpRegT13 = RISCV64Registers::f15;
+    static constexpr FPRReg fpRegT14 = RISCV64Registers::f16;
+    static constexpr FPRReg fpRegT15 = RISCV64Registers::f17;
+    static constexpr FPRReg fpRegT16 = RISCV64Registers::f28;
+    static constexpr FPRReg fpRegT17 = RISCV64Registers::f29;
+    static constexpr FPRReg fpRegT18 = RISCV64Registers::f30;
+    static constexpr FPRReg fpRegT19 = RISCV64Registers::f31;
+
+    static constexpr FPRReg fpRegCS0 = RISCV64Registers::f8;
+    static constexpr FPRReg fpRegCS1 = RISCV64Registers::f9;
+    static constexpr FPRReg fpRegCS2 = RISCV64Registers::f18;
+    static constexpr FPRReg fpRegCS3 = RISCV64Registers::f19;
+    static constexpr FPRReg fpRegCS4 = RISCV64Registers::f20;
+    static constexpr FPRReg fpRegCS5 = RISCV64Registers::f21;
+    static constexpr FPRReg fpRegCS6 = RISCV64Registers::f22;
+    static constexpr FPRReg fpRegCS7 = RISCV64Registers::f23;
+    static constexpr FPRReg fpRegCS8 = RISCV64Registers::f24;
+    static constexpr FPRReg fpRegCS9 = RISCV64Registers::f25;
+    static constexpr FPRReg fpRegCS10 = RISCV64Registers::f26;
+    static constexpr FPRReg fpRegCS11 = RISCV64Registers::f27;
+
+    static constexpr FPRReg argumentFPR0 = RISCV64Registers::f10; // fpRegT8
+    static constexpr FPRReg argumentFPR1 = RISCV64Registers::f11; // fpRegT9
+    static constexpr FPRReg argumentFPR2 = RISCV64Registers::f12; // fpRegT10
+    static constexpr FPRReg argumentFPR3 = RISCV64Registers::f13; // fpRegT11
+    static constexpr FPRReg argumentFPR4 = RISCV64Registers::f14; // fpRegT12
+    static constexpr FPRReg argumentFPR5 = RISCV64Registers::f15; // fpRegT13
+    static constexpr FPRReg argumentFPR6 = RISCV64Registers::f16; // fpRegT14
+    static constexpr FPRReg argumentFPR7 = RISCV64Registers::f17; // fpRegT15
+
+    static constexpr FPRReg returnValueFPR = RISCV64Registers::f10; // fpRegT8
+
+    static FPRReg toRegister(unsigned index)
+    {
+        ASSERT(index < numberOfRegisters);
+        static const FPRReg registerForIndex[numberOfRegisters] = {
+            fpRegT0, fpRegT1, fpRegT2, fpRegT3, fpRegT4, fpRegT5, fpRegT6, fpRegT7,
+            fpRegT8, fpRegT9, fpRegT10, fpRegT11, fpRegT12, fpRegT13, fpRegT14, fpRegT15,
+            fpRegT16, fpRegT17, fpRegT18, fpRegT19,
+        };
+        return registerForIndex[index];
+    }
+
+    static FPRReg toArgumentRegister(unsigned index)
+    {
+        ASSERT(index < numberOfArgumentRegisters);
+        static const FPRReg registerForIndex[numberOfArgumentRegisters] = {
+            argumentFPR0, argumentFPR1, argumentFPR2, argumentFPR3,
+            argumentFPR4, argumentFPR5, argumentFPR6, argumentFPR7,
+        };
+        return registerForIndex[index];
+    }
+
+    static unsigned toIndex(FPRReg reg)
+    {
+        ASSERT(reg != InvalidFPRReg);
+        ASSERT(static_cast<int>(reg) < 32);
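+        // Inverse mapping of toRegister(): the callee-saved FPRs (f8, f9,
+        // f18-f27) are not allocatable temporaries, so they map to InvalidIndex.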
+        static const unsigned indexForRegister[32] = {
+            0, 1, 2, 3, 4, 5, 6, 7,
+            InvalidIndex, InvalidIndex, 8, 9, 10, 11, 12, 13,
+            14, 15, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex,
+            InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 16, 17, 18, 19,
+        };
+        return indexForRegister[reg];
+    }
+
+    static const char* debugName(FPRReg reg)
+    {
+        ASSERT(reg != InvalidFPRReg);
+        return MacroAssembler::fprName(reg);
+    }
+
+    static constexpr unsigned InvalidIndex = 0xffffffff;
+};
+
+#endif // CPU(RISCV64)
+
</ins><span class="cx"> // We use this hack to get the FPRInfo from the FPRReg type in templates because our code is bad and we should feel bad..
</span><span class="cx"> constexpr FPRInfo toInfoFromReg(FPRReg) { return FPRInfo(); }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitGPRInfoh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/GPRInfo.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/GPRInfo.h        2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/jit/GPRInfo.h   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -798,6 +798,113 @@
</span><span class="cx"> 
</span><span class="cx"> #endif // CPU(MIPS)
</span><span class="cx"> 
</span><ins>+#if CPU(RISCV64)
+
+#define NUMBER_OF_ARGUMENT_REGISTERS 8u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 23u
+
+class GPRInfo {
+public:
+    typedef GPRReg RegisterType;
+    static constexpr unsigned numberOfRegisters = 13;
+    static constexpr unsigned numberOfArgumentRegisters = 8;
+
+    static constexpr GPRReg callFrameRegister = RISCV64Registers::fp;
+    static constexpr GPRReg numberTagRegister = RISCV64Registers::x25;
+    static constexpr GPRReg notCellMaskRegister = RISCV64Registers::x26;
+
+    static constexpr GPRReg regT0 = RISCV64Registers::x10;
+    static constexpr GPRReg regT1 = RISCV64Registers::x11;
+    static constexpr GPRReg regT2 = RISCV64Registers::x12;
+    static constexpr GPRReg regT3 = RISCV64Registers::x13;
+    static constexpr GPRReg regT4 = RISCV64Registers::x14;
+    static constexpr GPRReg regT5 = RISCV64Registers::x15;
+    static constexpr GPRReg regT6 = RISCV64Registers::x16;
+    static constexpr GPRReg regT7 = RISCV64Registers::x17;
+    static constexpr GPRReg regT8 = RISCV64Registers::x5;
+    static constexpr GPRReg regT9 = RISCV64Registers::x6;
+    static constexpr GPRReg regT10 = RISCV64Registers::x7;
+    static constexpr GPRReg regT11 = RISCV64Registers::x28;
+    static constexpr GPRReg regT12 = RISCV64Registers::x29;
+
+    static constexpr GPRReg regCS0 = RISCV64Registers::x9;
+    static constexpr GPRReg regCS1 = RISCV64Registers::x18;
+    static constexpr GPRReg regCS2 = RISCV64Registers::x19;
+    static constexpr GPRReg regCS3 = RISCV64Registers::x20;
+    static constexpr GPRReg regCS4 = RISCV64Registers::x21;
+    static constexpr GPRReg regCS5 = RISCV64Registers::x22;
+    static constexpr GPRReg regCS6 = RISCV64Registers::x23;
+    static constexpr GPRReg regCS7 = RISCV64Registers::x24;
+    static constexpr GPRReg regCS8 = RISCV64Registers::x25; // numberTag
+    static constexpr GPRReg regCS9 = RISCV64Registers::x26; // notCellMask
+    static constexpr GPRReg regCS10 = RISCV64Registers::x27;
+
+    static constexpr GPRReg argumentGPR0 = RISCV64Registers::x10; // regT0
+    static constexpr GPRReg argumentGPR1 = RISCV64Registers::x11; // regT1
+    static constexpr GPRReg argumentGPR2 = RISCV64Registers::x12; // regT2
+    static constexpr GPRReg argumentGPR3 = RISCV64Registers::x13; // regT3
+    static constexpr GPRReg argumentGPR4 = RISCV64Registers::x14; // regT4
+    static constexpr GPRReg argumentGPR5 = RISCV64Registers::x15; // regT5
+    static constexpr GPRReg argumentGPR6 = RISCV64Registers::x16; // regT6
+    static constexpr GPRReg argumentGPR7 = RISCV64Registers::x17; // regT7
+
+    static constexpr GPRReg nonArgGPR0 = RISCV64Registers::x5; // regT8
+    static constexpr GPRReg nonArgGPR1 = RISCV64Registers::x6; // regT9
+
+    static constexpr GPRReg returnValueGPR = RISCV64Registers::x10; // regT0
+    static constexpr GPRReg returnValueGPR2 = RISCV64Registers::x11; // regT1
+
+    static constexpr GPRReg nonPreservedNonReturnGPR = RISCV64Registers::x12; // regT2
+    static constexpr GPRReg nonPreservedNonArgumentGPR0 = RISCV64Registers::x5; // regT8
+    static constexpr GPRReg nonPreservedNonArgumentGPR1 = RISCV64Registers::x6; // regT9
+
+    static constexpr GPRReg wasmScratchGPR0 = RISCV64Registers::x6; // regT9
+    static constexpr GPRReg wasmScratchGPR1 = RISCV64Registers::x7; // regT10
+
+    static GPRReg toRegister(unsigned index)
+    {
+        ASSERT(index < numberOfRegisters);
+        static const GPRReg registerForIndex[numberOfRegisters] = {
+            regT0, regT1, regT2, regT3, regT4, regT5, regT6, regT7,
+            regT8, regT9, regT10, regT11, regT12,
+        };
+        return registerForIndex[index];
+    }
+
+    static GPRReg toArgumentRegister(unsigned index)
+    {
+        ASSERT(index < numberOfArgumentRegisters);
+        static const GPRReg registerForIndex[numberOfArgumentRegisters] = {
+            argumentGPR0, argumentGPR1, argumentGPR2, argumentGPR3,
+            argumentGPR4, argumentGPR5, argumentGPR6, argumentGPR7,
+        };
+        return registerForIndex[index];
+    }
+
+    static unsigned toIndex(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        ASSERT(static_cast<int>(reg) < 32);
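+        // Inverse mapping of toRegister(): zero/ra/sp/gp/tp (x0-x4), fp (x8),
+        // the callee saves (x9, x18-x27) and the scratch registers (x30, x31)
+        // are not allocatable temporaries, so they map to InvalidIndex.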
+        static const unsigned indexForRegister[32] = {
+            InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 8, 9, 10,
+            InvalidIndex, InvalidIndex, 0, 1, 2, 3, 4, 5,
+            6, 7, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex,
+            InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 11, 12, InvalidIndex, InvalidIndex,
+        };
+        return indexForRegister[reg];
+    }
+
+    static const char* debugName(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        return MacroAssembler::gprName(reg);
+    }
+
+    static constexpr unsigned InvalidIndex = 0xffffffff;
+};
+
+#endif // CPU(RISCV64)
+
</ins><span class="cx"> // The baseline JIT uses "accumulator" style execution with regT0 (for 64-bit)
</span><span class="cx"> // and regT0 + regT1 (for 32-bit) serving as the accumulator register(s) for
</span><span class="cx"> // passing results of one opcode to the next. Hence:
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitRegisterSetcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/RegisterSet.cpp (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/RegisterSet.cpp  2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/jit/RegisterSet.cpp     2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -155,6 +155,30 @@
</span><span class="cx"> #elif CPU(ARM_THUMB2) || CPU(MIPS)
</span><span class="cx">     result.set(GPRInfo::regCS0);
</span><span class="cx">     result.set(GPRInfo::regCS1);
</span><ins>+#elif CPU(RISCV64)
+    result.set(GPRInfo::regCS0);
+    result.set(GPRInfo::regCS1);
+    result.set(GPRInfo::regCS2);
+    result.set(GPRInfo::regCS3);
+    result.set(GPRInfo::regCS4);
+    result.set(GPRInfo::regCS5);
+    result.set(GPRInfo::regCS6);
+    result.set(GPRInfo::regCS7);
+    result.set(GPRInfo::regCS8);
+    result.set(GPRInfo::regCS9);
+    result.set(GPRInfo::regCS10);
+    result.set(FPRInfo::fpRegCS0);
+    result.set(FPRInfo::fpRegCS1);
+    result.set(FPRInfo::fpRegCS2);
+    result.set(FPRInfo::fpRegCS3);
+    result.set(FPRInfo::fpRegCS4);
+    result.set(FPRInfo::fpRegCS5);
+    result.set(FPRInfo::fpRegCS6);
+    result.set(FPRInfo::fpRegCS7);
+    result.set(FPRInfo::fpRegCS8);
+    result.set(FPRInfo::fpRegCS9);
+    result.set(FPRInfo::fpRegCS10);
+    result.set(FPRInfo::fpRegCS11);
</ins><span class="cx"> #endif
</span><span class="cx">     return result;
</span><span class="cx"> }
</span><span class="lines">@@ -192,7 +216,7 @@
</span><span class="cx"> #elif CPU(ARM_THUMB2)
</span><span class="cx">     result.set(GPRInfo::regCS0);
</span><span class="cx">     result.set(GPRInfo::regCS1);
</span><del>-#elif CPU(ARM64)
</del><ins>+#elif CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">     result.set(GPRInfo::regCS6);
</span><span class="cx">     result.set(GPRInfo::regCS7);
</span><span class="cx">     static_assert(GPRInfo::regCS8 == GPRInfo::numberTagRegister, "");
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorellintLLIntDatah"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/llint/LLIntData.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/llint/LLIntData.h    2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/llint/LLIntData.h       2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -381,7 +381,7 @@
</span><span class="cx"> #elif CPU(X86_64) && OS(WINDOWS)
</span><span class="cx">     static constexpr GPRReg metadataTableGPR = GPRInfo::regCS3;
</span><span class="cx">     static constexpr GPRReg pbGPR = GPRInfo::regCS4;
</span><del>-#elif CPU(ARM64)
</del><ins>+#elif CPU(ARM64) || CPU(RISCV64)
</ins><span class="cx">     static constexpr GPRReg metadataTableGPR = GPRInfo::regCS6;
</span><span class="cx">     static constexpr GPRReg pbGPR = GPRInfo::regCS7;
</span><span class="cx"> #elif CPU(MIPS) || CPU(ARM_THUMB2)
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorellintLLIntOfflineAsmConfigh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h        2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -47,6 +47,7 @@
</span><span class="cx"> #define OFFLINE_ASM_ARMv7k 0
</span><span class="cx"> #define OFFLINE_ASM_ARMv7s 0
</span><span class="cx"> #define OFFLINE_ASM_MIPS 0
</span><ins>+#define OFFLINE_ASM_RISCV64 0
</ins><span class="cx"> 
</span><span class="cx"> #else // ENABLE(C_LOOP)
</span><span class="cx"> 
</span><span class="lines">@@ -115,6 +116,12 @@
</span><span class="cx"> #define OFFLINE_ASM_ARM64E 0
</span><span class="cx"> #endif
</span><span class="cx"> 
</span><ins>+#if CPU(RISCV64)
+#define OFFLINE_ASM_RISCV64 1
+#else
+#define OFFLINE_ASM_RISCV64 0
+#endif
+
</ins><span class="cx"> #if CPU(MIPS)
</span><span class="cx"> #ifdef WTF_MIPS_PIC
</span><span class="cx"> #define S(x) #x
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorellintLowLevelInterpreterasm"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm        2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -219,7 +219,7 @@
</span><span class="cx"> 
</span><span class="cx"> const maxFrameExtentForSlowPathCall = constexpr maxFrameExtentForSlowPathCall
</span><span class="cx"> 
</span><del>-if X86_64 or X86_64_WIN or ARM64 or ARM64E
</del><ins>+if X86_64 or X86_64_WIN or ARM64 or ARM64E or RISCV64
</ins><span class="cx">     const CalleeSaveSpaceAsVirtualRegisters = 4
</span><span class="cx"> elsif C_LOOP or C_LOOP_WIN
</span><span class="cx">     const CalleeSaveSpaceAsVirtualRegisters = 1
</span><span class="lines">@@ -276,7 +276,7 @@
</span><span class="cx"> #   This requires an add before the call, and a sub after.
</span><span class="cx"> if JSVALUE64
</span><span class="cx">     const PC = t4 # When changing this, make sure LLIntPC is up to date in LLIntPCRanges.h
</span><del>-    if ARM64 or ARM64E
</del><ins>+    if ARM64 or ARM64E or RISCV64
</ins><span class="cx">         const metadataTable = csr6
</span><span class="cx">         const PB = csr7
</span><span class="cx">         const numberTag = csr8
</span><span class="lines">@@ -730,7 +730,7 @@
</span><span class="cx">     end
</span><span class="cx"> end
</span><span class="cx"> 
</span><del>-if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
</del><ins>+if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN or RISCV64
</ins><span class="cx">     const CalleeSaveRegisterCount = 0
</span><span class="cx"> elsif ARMv7
</span><span class="cx">     const CalleeSaveRegisterCount = 7
</span><span class="lines">@@ -747,7 +747,7 @@
</span><span class="cx"> const VMEntryTotalFrameSize = (CalleeRegisterSaveSize + sizeof VMEntryRecord + StackAlignment - 1) & ~StackAlignmentMask
</span><span class="cx"> 
</span><span class="cx"> macro pushCalleeSaves()
</span><del>-    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
</del><ins>+    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN or RISCV64
</ins><span class="cx">     elsif ARMv7
</span><span class="cx">         emit "push {r4-r6, r8-r11}"
</span><span class="cx">     elsif MIPS
</span><span class="lines">@@ -769,7 +769,7 @@
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="cx"> macro popCalleeSaves()
</span><del>-    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
</del><ins>+    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN or RISCV64
</ins><span class="cx">     elsif ARMv7
</span><span class="cx">         emit "pop {r4-r6, r8-r11}"
</span><span class="cx">     elsif MIPS
</span><span class="lines">@@ -794,7 +794,7 @@
</span><span class="cx">         push cfr
</span><span class="cx">     elsif X86 or X86_WIN or X86_64 or X86_64_WIN
</span><span class="cx">         push cfr
</span><del>-    elsif ARM64 or ARM64E
</del><ins>+    elsif ARM64 or ARM64E or RISCV64
</ins><span class="cx">         push cfr, lr
</span><span class="cx">     else
</span><span class="cx">         error
</span><span class="lines">@@ -809,7 +809,7 @@
</span><span class="cx">         pop lr
</span><span class="cx">     elsif X86 or X86_WIN or X86_64 or X86_64_WIN
</span><span class="cx">         pop cfr
</span><del>-    elsif ARM64 or ARM64E
</del><ins>+    elsif ARM64 or ARM64E or RISCV64
</ins><span class="cx">         pop lr, cfr
</span><span class="cx">     end
</span><span class="cx"> end
</span><span class="lines">@@ -847,6 +847,11 @@
</span><span class="cx">         storep csr5, -16[cfr]
</span><span class="cx">         storep csr4, -24[cfr]
</span><span class="cx">         storep csr3, -32[cfr]
</span><ins>+    elsif RISCV64
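+        # The four LLInt callee saves (CalleeSaveSpaceAsVirtualRegisters = 4):
+        # csr6 = metadataTable, csr7 = PB, csr8 = numberTag, csr9 = notCellMask.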
+        storep csr9, -8[cfr]
+        storep csr8, -16[cfr]
+        storep csr7, -24[cfr]
+        storep csr6, -32[cfr]
</ins><span class="cx">     end
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="lines">@@ -876,11 +881,16 @@
</span><span class="cx">         loadp -24[cfr], csr4
</span><span class="cx">         loadp -16[cfr], csr5
</span><span class="cx">         loadp -8[cfr], csr6
</span><ins>+    elsif RISCV64
+        loadp -32[cfr], csr6
+        loadp -24[cfr], csr7
+        loadp -16[cfr], csr8
+        loadp -8[cfr], csr9
</ins><span class="cx">     end
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="cx"> macro copyCalleeSavesToEntryFrameCalleeSavesBuffer(entryFrame)
</span><del>-    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS
</del><ins>+    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS or RISCV64
</ins><span class="cx">         vmEntryRecord(entryFrame, entryFrame)
</span><span class="cx">         leap VMEntryRecord::calleeSaveRegistersBuffer[entryFrame], entryFrame
</span><span class="cx">         if ARM64 or ARM64E
</span><span class="lines">@@ -919,12 +929,36 @@
</span><span class="cx">         elsif ARMv7 or MIPS
</span><span class="cx">             storep csr0, [entryFrame]
</span><span class="cx">             storep csr1, 4[entryFrame]
</span><ins>+        elsif RISCV64
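+            # All 23 callee saves (NUMBER_OF_CALLEE_SAVES_REGISTERS): the 11 GPRs
+            # csr0-csr10 at offsets 0-80, then the 12 FPRs csfr0-csfr11 at 88-176.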
+            storep csr0, [entryFrame]
+            storep csr1, 8[entryFrame]
+            storep csr2, 16[entryFrame]
+            storep csr3, 24[entryFrame]
+            storep csr4, 32[entryFrame]
+            storep csr5, 40[entryFrame]
+            storep csr6, 48[entryFrame]
+            storep csr7, 56[entryFrame]
+            storep csr8, 64[entryFrame]
+            storep csr9, 72[entryFrame]
+            storep csr10, 80[entryFrame]
+            stored csfr0, 88[entryFrame]
+            stored csfr1, 96[entryFrame]
+            stored csfr2, 104[entryFrame]
+            stored csfr3, 112[entryFrame]
+            stored csfr4, 120[entryFrame]
+            stored csfr5, 128[entryFrame]
+            stored csfr6, 136[entryFrame]
+            stored csfr7, 144[entryFrame]
+            stored csfr8, 152[entryFrame]
+            stored csfr9, 160[entryFrame]
+            stored csfr10, 168[entryFrame]
+            stored csfr11, 176[entryFrame]
</ins><span class="cx">         end
</span><span class="cx">     end
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="cx"> macro copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm, temp)
</span><del>-    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS
</del><ins>+    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS or RISCV64
</ins><span class="cx">         loadp VM::topEntryFrame[vm], temp
</span><span class="cx">         copyCalleeSavesToEntryFrameCalleeSavesBuffer(temp)
</span><span class="cx">     end
</span><span class="lines">@@ -931,7 +965,7 @@
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="cx"> macro restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(vm, temp)
</span><del>-    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS
</del><ins>+    if ARM64 or ARM64E or X86_64 or X86_64_WIN or ARMv7 or MIPS or RISCV64
</ins><span class="cx">         loadp VM::topEntryFrame[vm], temp
</span><span class="cx">         vmEntryRecord(temp, temp)
</span><span class="cx">         leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
</span><span class="lines">@@ -971,12 +1005,36 @@
</span><span class="cx">         elsif ARMv7 or MIPS
</span><span class="cx">             loadp [temp], csr0
</span><span class="cx">             loadp 4[temp], csr1
</span><ins>+        elsif RISCV64
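+            # Mirror of the entry-frame store sequence above: reload the 11 GPR
+            # and 12 FPR callee saves from the buffer at the same offsets.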
+            loadq [temp], csr0
+            loadq 8[temp], csr1
+            loadq 16[temp], csr2
+            loadq 24[temp], csr3
+            loadq 32[temp], csr4
+            loadq 40[temp], csr5
+            loadq 48[temp], csr6
+            loadq 56[temp], csr7
+            loadq 64[temp], csr8
+            loadq 72[temp], csr9
+            loadq 80[temp], csr10
+            loadd 88[temp], csfr0
+            loadd 96[temp], csfr1
+            loadd 104[temp], csfr2
+            loadd 112[temp], csfr3
+            loadd 120[temp], csfr4
+            loadd 128[temp], csfr5
+            loadd 136[temp], csfr6
+            loadd 144[temp], csfr7
+            loadd 152[temp], csfr8
+            loadd 160[temp], csfr9
+            loadd 168[temp], csfr10
+            loadd 176[temp], csfr11
</ins><span class="cx">         end
</span><span class="cx">     end
</span><span class="cx"> end
</span><span class="cx"> 
</span><span class="cx"> macro preserveReturnAddressAfterCall(destinationRegister)
</span><del>-    if C_LOOP or C_LOOP_WIN or ARMv7 or ARM64 or ARM64E or MIPS
</del><ins>+    if C_LOOP or C_LOOP_WIN or ARMv7 or ARM64 or ARM64E or MIPS or RISCV64
</ins><span class="cx">         # In C_LOOP or C_LOOP_WIN case, we're only preserving the bytecode vPC.
</span><span class="cx">         move lr, destinationRegister
</span><span class="cx">     elsif X86 or X86_WIN or X86_64 or X86_64_WIN
</span><span class="lines">@@ -990,7 +1048,7 @@
</span><span class="cx">     tagReturnAddress sp
</span><span class="cx">     if X86 or X86_WIN or X86_64 or X86_64_WIN
</span><span class="cx">         push cfr
</span><del>-    elsif ARM64 or ARM64E
</del><ins>+    elsif ARM64 or ARM64E or RISCV64
</ins><span class="cx">         push cfr, lr
</span><span class="cx">     elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
</span><span class="cx">         push lr
</span><span class="lines">@@ -1002,7 +1060,7 @@
</span><span class="cx"> macro functionEpilogue()
</span><span class="cx">     if X86 or X86_WIN or X86_64 or X86_64_WIN
</span><span class="cx">         pop cfr
</span><del>-    elsif ARM64 or ARM64E
</del><ins>+    elsif ARM64 or ARM64E or RISCV64
</ins><span class="cx">         pop lr, cfr
</span><span class="cx">     elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
</span><span class="cx">         pop cfr
</span><span class="lines">@@ -1158,7 +1216,7 @@
</span><span class="cx">     addi StackAlignment - 1 + CallFrameHeaderSize, temp2
</span><span class="cx">     andi ~StackAlignmentMask, temp2
</span><span class="cx"> 
</span><del>-    if ARMv7 or ARM64 or ARM64E or C_LOOP or C_LOOP_WIN or MIPS
</del><ins>+    if ARMv7 or ARM64 or ARM64E or C_LOOP or C_LOOP_WIN or MIPS or RISCV64
</ins><span class="cx">         addp CallerFrameAndPCSize, sp
</span><span class="cx">         subi CallerFrameAndPCSize, temp2
</span><span class="cx">         loadp CallerFrameAndPC::returnPC[cfr], lr
</span><span class="lines">@@ -1454,7 +1512,7 @@
</span><span class="cx">         btpz r0, .recover
</span><span class="cx">         move cfr, sp # restore the previous sp
</span><span class="cx">         # pop the callerFrame since we will jump to a function that wants to save it
</span><del>-        if ARM64
</del><ins>+        if ARM64 or RISCV64
</ins><span class="cx">             pop lr, cfr
</span><span class="cx">         elsif ARM64E
</span><span class="cx">             # untagReturnAddress will be performed in Gate::entryOSREntry.
</span><span class="lines">@@ -1810,7 +1868,7 @@
</span><span class="cx">             leap (label - _%kind%_relativePCBase)[t3], t4
</span><span class="cx">             move index, t5
</span><span class="cx">             storep t4, [map, t5, 4]
</span><del>-        elsif ARM64
</del><ins>+        elsif ARM64 or RISCV64
</ins><span class="cx">             pcrtoaddr label, t3
</span><span class="cx">             move index, t4
</span><span class="cx">             storep t3, [map, t4, PtrSize]
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorellintLowLevelInterpreter64asm"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm      2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm 2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -93,7 +93,7 @@
</span><span class="cx"> 
</span><span class="cx"> macro cCall2(function)
</span><span class="cx">     checkStackPointerAlignment(t4, 0xbad0c002)
</span><del>-    if X86_64 or ARM64 or ARM64E
</del><ins>+    if X86_64 or ARM64 or ARM64E or RISCV64
</ins><span class="cx">         call function
</span><span class="cx">     elsif X86_64_WIN
</span><span class="cx">         # Note: this implementation is only correct if the return type size is > 8 bytes.
</span><span class="lines">@@ -140,7 +140,7 @@
</span><span class="cx"> # This barely works. arg3 and arg4 should probably be immediates.
</span><span class="cx"> macro cCall4(function)
</span><span class="cx">     checkStackPointerAlignment(t4, 0xbad0c004)
</span><del>-    if X86_64 or ARM64 or ARM64E
</del><ins>+    if X86_64 or ARM64 or ARM64E or RISCV64
</ins><span class="cx">         call function
</span><span class="cx">     elsif X86_64_WIN
</span><span class="cx">         # On Win64, rcx, rdx, r8, and r9 are used for passing the first four parameters.
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreofflineasmbackendsrb"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/offlineasm/backends.rb (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/offlineasm/backends.rb       2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/offlineasm/backends.rb  2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -27,6 +27,7 @@
</span><span class="cx"> require "ast"
</span><span class="cx"> require "x86"
</span><span class="cx"> require "mips"
</span><ins>+require "riscv64"
</ins><span class="cx"> require "cloop"
</span><span class="cx"> 
</span><span class="cx"> begin
</span><span class="lines">@@ -44,6 +45,7 @@
</span><span class="cx">      "ARM64",
</span><span class="cx">      "ARM64E",
</span><span class="cx">      "MIPS",
</span><ins>+     "RISCV64",
</ins><span class="cx">      "C_LOOP",
</span><span class="cx">      "C_LOOP_WIN"
</span><span class="cx">     ]
</span><span class="lines">@@ -63,6 +65,7 @@
</span><span class="cx">      "ARM64",
</span><span class="cx">      "ARM64E",
</span><span class="cx">      "MIPS",
</span><ins>+     "RISCV64",
</ins><span class="cx">      "C_LOOP",
</span><span class="cx">      "C_LOOP_WIN"
</span><span class="cx">     ]
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreofflineasmregistersrb"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/offlineasm/registers.rb (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/offlineasm/registers.rb      2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/JavaScriptCore/offlineasm/registers.rb 2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -57,7 +57,8 @@
</span><span class="cx">      "csr6",
</span><span class="cx">      "csr7",
</span><span class="cx">      "csr8",
</span><del>-     "csr9"
</del><ins>+     "csr9",
+     "csr10"
</ins><span class="cx">     ]
</span><span class="cx"> 
</span><span class="cx"> FPRS =
</span><span class="lines">@@ -80,6 +81,10 @@
</span><span class="cx">      "csfr5",
</span><span class="cx">      "csfr6",
</span><span class="cx">      "csfr7",
</span><ins>+     "csfr8",
+     "csfr9",
+     "csfr10",
+     "csfr11",
</ins><span class="cx">      "fr"
</span><span class="cx">     ]
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreofflineasmriscv64rb"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/offlineasm/riscv64.rb (0 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/offlineasm/riscv64.rb                                (rev 0)
+++ trunk/Source/JavaScriptCore/offlineasm/riscv64.rb   2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -0,0 +1,2057 @@
</span><ins>+# Copyright (C) 2021 Igalia S.L.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+
+# Naming conventions
+#
+# x<number> => GPR, used to operate with 32-bit and 64-bit integer values
+# f<number> => FPR, used to operate with 32-bit and 64-bit floating-point values
+#
+# GPR conventions, to match the baseline JIT:
+#
+# x0  => not used (except where needed for operations) (RISC-V hard-wired zero register)
+# x1  => lr (through alias ra) (RISC-V return address register)
+# x2  => sp (through alias sp) (RISC-V stack pointer register)
+# x3  => not used (RISC-V global pointer register)
+# x4  => not used (RISC-V thread pointer register)
+# x5  => not used
+# x6  => ws0
+# x7  => ws1
+# x8  => cfr (through alias fp) (RISC-V frame pointer register)
+# x9  => csr0
+# x10 => t0, a0, wa0, r0
+# x11 => t1, a1, wa1, r1
+# x12 => t2, a2, wa2
+# x13 => t3, a3, wa3
+# x14 => t4, a4, wa4
+# x15 => t5, a5, wa5
+# x16 => t6, a6, wa6
+# x17 => t7, a7, wa7
+# x18 => csr1
+# x19 => csr2
+# x20 => csr3
+# x21 => csr4
+# x22 => csr5
+# x23 => csr6 (metadataTable)
+# x24 => csr7 (PB)
+# x25 => csr8 (numberTag)
+# x26 => csr9 (notCellMask)
+# x27 => csr10
+# x28 => not used
+# x29 => not used
+# x30 => scratch register
+# x31 => scratch register
+#
+# FPR conventions, to match the baseline JIT:
+#
+# f0  => ft0
+# f1  => ft1
+# f2  => ft2
+# f3  => ft3
+# f4  => ft4
+# f5  => ft5
+# f6  => not used
+# f7  => not used
+# f8  => csfr0
+# f9  => csfr1
+# f10 => fa0, wfa0
+# f11 => fa1, wfa1
+# f12 => fa2, wfa2
+# f13 => fa3, wfa3
+# f14 => fa4, wfa4
+# f15 => fa5, wfa5
+# f16 => fa6, wfa6
+# f17 => fa7, wfa7
+# f18 => csfr2
+# f19 => csfr3
+# f20 => csfr4
+# f21 => csfr5
+# f22 => csfr6
+# f23 => csfr7
+# f24 => csfr8
+# f25 => csfr9
+# f26 => csfr10
+# f27 => csfr11
+# f28 => not used
+# f29 => not used
+# f30 => not used
+# f31 => not used
+
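+# x30 and x31 are reserved as scratch registers for this backend: the emitter
+# functions below materialize out-of-range immediates, folded addresses and
+# intermediate values into them, so they are never exposed as offlineasm names.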
+
+def riscv64OperandTypes(operands)
+    return operands.map {
+        |op|
+        op.class
+    }
+end
+
+def riscv64RaiseMismatchedOperands(operands)
+    raise "Unable to match operands #{riscv64OperandTypes(operands)}"
+end
+
+def riscv64RaiseUnsupported
+    raise "Not supported for RISCV64"
+end
+
+def riscv64LoadInstruction(size)
+    case size
+    when :b
+        "lb"
+    when :bu
+        "lbu"
+    when :h
+        "lh"
+    when :hu
+        "lhu"
+    when :w
+        "lw"
+    when :wu
+        "lwu"
+    when :d
+        "ld"
+    else
+        raise "Unsupported size #{size}"
+    end
+end
+
+def riscv64ZeroExtendedLoadInstruction(size)
+    case size
+    when :b
+        riscv64LoadInstruction(:bu)
+    when :h
+        riscv64LoadInstruction(:hu)
+    when :w
+        riscv64LoadInstruction(:wu)
+    when :d
+        riscv64LoadInstruction(:d)
+    else
+        raise "Unsupported size #{size}"
+    end
+end
+
+def riscv64StoreInstruction(size)
+    case size
+    when :b
+        "sb"
+    when :h
+        "sh"
+    when :w
+        "sw"
+    when :d
+        "sd"
+    else
+        raise "Unsupported size #{size}"
+    end
+end
+
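+# 32-bit (:w) results are kept zero-extended in their 64-bit registers, so after
+# sign-extending instructions such as addw/subw the upper 32 bits are cleared by
+# masking with 0xffffffff.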
+def riscv64EmitRegisterMask(target, size)
+    case size
+    when :w
+        $asm.puts "li x31, 0xffffffff"
+        $asm.puts "and #{target.riscv64Operand}, #{target.riscv64Operand}, x31"
+    when :d
+    else
+        raise "Unsupported size"
+    end
+end
+
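+# bgt, bgtu, ble and bleu are standard RISC-V assembler pseudoinstructions that
+# expand to blt/bltu/bge/bgeu with the operand order swapped.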
+def riscv64ConditionalBranchInstruction(condition)
+    case condition
+    when :eq
+        "beq"
+    when :neq
+        "bne"
+    when :a
+        "bgtu"
+    when :aeq
+        "bgeu"
+    when :b
+        "bltu"
+    when :beq
+        "bleu"
+    when :gt
+        "bgt"
+    when :gteq
+        "bge"
+    when :lt
+        "blt"
+    when :lteq
+        "ble"
+    else
+        raise "Unsupported condition #{condition}"
+    end
+end
+
+def riscv64EmitLoad(operands, type, mask)
+    instruction = riscv64LoadInstruction(type)
+
+    case riscv64OperandTypes(operands)
+    when [Address, RegisterID]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{instruction} #{operands[1].riscv64Operand}, 0(x31)"
+        else
+            $asm.puts "#{instruction} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+        end
+    when [BaseIndex, RegisterID]
+        operands[0].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{instruction} #{operands[1].riscv64Operand}, 0(x31)"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+
+    case mask
+    when :w
+        riscv64EmitRegisterMask(operands[1], :w)
+    when :none
+    else
+        raise "Unsupported mask type"
+    end
+end
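+
+# Illustrative example (assuming the usual offlineasm lowering): "loadp 8[cfr], t0"
+# reaches riscv64EmitLoad as [Address, RegisterID] with an in-range offset and
+# emits a single "ld x10, 8(x8)"; a BaseIndex operand instead has its effective
+# address computed into x31 first, followed by "ld x10, 0(x31)".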
+
+def riscv64EmitStore(operands, type)
+    instruction = riscv64StoreInstruction(type)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, Address]
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{instruction} #{operands[0].riscv64Operand}, 0(x31)"
+        else
+            $asm.puts "#{instruction} #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        end
+    when [RegisterID, BaseIndex]
+        operands[1].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{instruction} #{operands[0].riscv64Operand}, 0(x31)"
+    when [Immediate, Address]
+        $asm.puts "li x30, #{operands[0].riscv64Operand}"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{instruction} x30, 0(x31)"
+        else
+            $asm.puts "#{instruction} x30, #{operands[1].riscv64Operand}"
+        end
+    when [Immediate, BaseIndex]
+        operands[1].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "li x30, #{operands[0].riscv64Operand}"
+        $asm.puts "#{instruction} x30, 0(x31)"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitAdditionOperation(operands, size, operation)
+    raise "Unsupported size" unless [:w, :d].include? size
+
+    def additionInstruction(size, operation)
+        case operation
+        when :add
+            size == :w ? "addw" : "add"
+        when :sub
+            size == :w ? "subw" : "sub"
+        else
+            raise "Unsupported arithmetic operation"
+        end
+    end
+
+    instruction = additionInstruction(size, operation)
+    loadInstruction = riscv64LoadInstruction(size)
+    storeInstruction = riscv64StoreInstruction(size)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID]
+        operands = [operands[1], operands[0], operands[1]]
+    when [Immediate, RegisterID]
+        operands = [operands[1], operands[0], operands[1]]
+    when [Address, RegisterID]
+        operands = [operands[1], operands[0], operands[1]]
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID]
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[2], size)
+    when [RegisterID, Immediate, RegisterID]
+        operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, x31"
+        riscv64EmitRegisterMask(operands[2], size)
+    when [Immediate, RegisterID, RegisterID]
+        operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, x31, #{operands[1].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[2], size)
+    when [RegisterID, Address, RegisterID]
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x31, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x31, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, x31"
+        riscv64EmitRegisterMask(operands[2], size)
+    when [Immediate, Address]
+        $asm.puts "li x30, #{operands[0].riscv64Operand}"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x31, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x31, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} x30, x30, x31"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{storeInstruction} x30, 0(x31)"
+        else
+            $asm.puts "#{storeInstruction} x30, #{operands[1].riscv64Operand}"
+        end
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitMulDivArithmetic(operands, size, operation)
+    raise "Unsupported size" unless [:w, :d].include? size
+
+    def arithmeticInstruction(size, operation)
+        case operation
+        when :mul
+            size == :w ? "mulw" : "mul"
+        when :div
+            size == :w ? "divuw" : "divu"
+        else
+            raise "Unsupported arithmetic operation"
+        end
+    end
+
+    instruction = arithmeticInstruction(size, operation)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID]
+        operands = [operands[0], operands[1], operands[1]]
+    when [Immediate, RegisterID]
+        operands = [operands[1], operands[0], operands[1]]
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID]
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+    when [RegisterID, Immediate, RegisterID]
+        $asm.puts "li x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, x31"
+    when [Immediate, RegisterID, RegisterID]
+        $asm.puts "li x31, #{operands[0].riscv64Operand}"
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, x31, #{operands[1].riscv64Operand}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitConditionalBranch(operands, size, condition)
+    instruction = riscv64ConditionalBranchInstruction(condition)
+
+    def signExtendForSize(register, target, size)
+        case size
+        when :b
+            $asm.puts "slli #{target}, #{register.riscv64Operand}, 24"
+            $asm.puts "sext.w #{target}, #{target}"
+            $asm.puts "srai #{target}, #{target}, 24"
+        when :w
+            $asm.puts "sext.w #{target}, #{register.riscv64Operand}"
+        when :d
+            $asm.puts "mv #{target}, #{register.riscv64Operand}"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, LocalLabelReference]
+        signExtendForSize(operands[0], 'x30', size)
+        signExtendForSize(operands[1], 'x31', size)
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    when [RegisterID, Immediate, LocalLabelReference]
+        signExtendForSize(operands[0], 'x30', size)
+        $asm.puts "li x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    when [RegisterID, Address, LocalLabelReference]
+        signExtendForSize(operands[0], 'x30', size)
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{riscv64LoadInstruction(size)} x31, 0(x31)"
+        else
+            $asm.puts "#{riscv64LoadInstruction(size)} x31, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    when [RegisterID, BaseIndex, LocalLabelReference]
+        operands[1].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{riscv64LoadInstruction(size)} x31, 0(x31)"
+        signExtendForSize(operands[0], 'x30', size)
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    when [Address, RegisterID, LocalLabelReference]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{riscv64LoadInstruction(size)} x30, 0(x31)"
+        else
+            $asm.puts "#{riscv64LoadInstruction(size)} x30, #{operands[0].riscv64Operand}"
+        end
+        signExtendForSize(operands[1], 'x31', size)
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    when [Address, Immediate, LocalLabelReference]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{riscv64LoadInstruction(size)} x30, 0(x31)"
+        else
+            $asm.puts "#{riscv64LoadInstruction(size)} x30, #{operands[0].riscv64Operand}"
+        end
+        $asm.puts "li x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} x30, x31, #{operands[2].asmLabel}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitConditionalBranchForTest(operands, size, test)
+    def branchInstruction(test)
+        case test
+        when :z
+            "beqz"
+        when :nz
+            "bnez"
+        when :s
+            "bltz"
+        else
+            raise "Unsupported test #{test}"
+        end
+    end
+
+    def signExtendForSize(target, size)
+        case size
+        when :b
+            $asm.puts "slli #{target}, #{target}, 24"
+            $asm.puts "sext.w #{target}, #{target}"
+            $asm.puts "srai #{target}, #{target}, 24"
+        when :h
+            $asm.puts "slli #{target}, #{target}, 16"
+            $asm.puts "sext.w #{target}, #{target}"
+            $asm.puts "srai #{target}, #{target}, 16"
+        when :w
+            $asm.puts "sext.w #{target}, #{target}"
+        when :d
+            return
+        end
+    end
+
+    bInstruction = branchInstruction(test)
+    loadInstruction = riscv64LoadInstruction(size)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, LocalLabelReference]
+        case test
+        when :s
+            $asm.puts "mv x31, #{operands[0].riscv64Operand}"
+            signExtendForSize('x31', size)
+            $asm.puts "#{bInstruction} x31, #{operands[1].asmLabel}"
+        else
+            $asm.puts "#{bInstruction} #{operands[0].riscv64Operand}, #{operands[1].asmLabel}"
+        end
+    when [RegisterID, RegisterID, LocalLabelReference]
+        $asm.puts "and x31, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        signExtendForSize('x31', size)
+        $asm.puts "#{bInstruction} x31, #{operands[2].asmLabel}"
+    when [RegisterID, Immediate, LocalLabelReference]
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "and x31, #{operands[0].riscv64Operand}, x31"
+        else
+            $asm.puts "andi x31, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        end
+        signExtendForSize('x31', size)
+        $asm.puts "#{bInstruction} x31, #{operands[2].asmLabel}"
+    when [Address, LocalLabelReference]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x31, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x31, #{operands[0].riscv64Operand}"
+        end
+        $asm.puts "#{bInstruction} x31, #{operands[1].asmLabel}"
+    when [Address, Immediate, LocalLabelReference]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x30, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x30, #{operands[0].riscv64Operand}"
+        end
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "and x31, x30, x31"
+        else
+            $asm.puts "andi x31, x30, #{operands[1].riscv64Operand}"
+        end
+        signExtendForSize('x31', size)
+        $asm.puts "#{bInstruction} x31, #{operands[2].asmLabel}"
+    when [BaseIndex, LocalLabelReference]
+        operands[0].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{loadInstruction} x31, 0(x31)"
+        $asm.puts "#{bInstruction} x31, #{operands[1].asmLabel}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitConditionalBranchForAdditionOperation(operands, size, operation, test)
+    def additionInstruction(size, operation)
+        case operation
+        when :add
+            size == :w ? "addw" : "add"
+        when :sub
+            size == :w ? "subw" : "sub"
+        else
+            raise "Unsupported arithmetic operation"
+        end
+    end
+
+    def emitBranchForTest(test, target, label)
+        case test
+        when :z
+            $asm.puts "beqz #{target.riscv64Operand}, #{label.asmLabel}"
+        when :nz
+            $asm.puts "bnez #{target.riscv64Operand}, #{label.asmLabel}"
+        when :s
+            $asm.puts "bltz #{target.riscv64Operand}, #{label.asmLabel}"
+        else
+            raise "Unsupported test"
+        end
+    end
+
+    instruction = additionInstruction(size, operation)
+    loadInstruction = riscv64LoadInstruction(size)
+    storeInstruction = riscv64StoreInstruction(size)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, LocalLabelReference]
+        operands = [operands[1], operands[0], operands[1], operands[2]]
+    when [Immediate, RegisterID, LocalLabelReference]
+        operands = [operands[1], operands[0], operands[1], operands[2]]
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID, LocalLabelReference]
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        $asm.puts "mv x30, #{operands[2].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[2], size)
+        emitBranchForTest(test, RISCV64ScratchRegister.x30, operands[3])
+    when [RegisterID, Immediate, RegisterID, LocalLabelReference]
+        $asm.puts "li x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, x31"
+        $asm.puts "mv x30, #{operands[2].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[2], size)
+        emitBranchForTest(test, RISCV64ScratchRegister.x30, operands[3])
+    when [Immediate, Address, LocalLabelReference]
+        $asm.puts "li x30, #{operands[0].riscv64Operand}"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x31, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x31, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} x30, x30, x31"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{storeInstruction} x30, 0(x31)"
+        else
+            $asm.puts "#{storeInstruction} x30, #{operands[1].riscv64Operand}"
+        end
+        emitBranchForTest(test, RISCV64ScratchRegister.x30, operands[2])
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitConditionalBranchForMultiplicationOperation(operands, size, test)
+    raise "Unsupported size" unless size == :w
+
+    def emitMultiplication(lhs, rhs)
+        $asm.puts "sext.w x30, #{lhs.riscv64Operand}"
+        $asm.puts "sext.w x31, #{rhs.riscv64Operand}"
+        $asm.puts "mul x30, x30, x31"
+    end
+
+    def emitBranchForTest(test, label)
+        case test
+        when :z
+            $asm.puts "beqz x30, #{label.asmLabel}"
+        when :nz
+            $asm.puts "bnez x30, #{label.asmLabel}"
+        when :s
+            $asm.puts "bltz x30, #{label.asmLabel}"
+        else
+            raise "Unsupported test"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, LocalLabelReference]
+        emitMultiplication(operands[0], operands[1])
+        $asm.puts "mv #{operands[1].riscv64Operand}, x30"
+        riscv64EmitRegisterMask(operands[1], size)
+
+        emitBranchForTest(test, operands[2])
+    when [Immediate, RegisterID, LocalLabelReference]
+        $asm.puts "li x30, #{operands[0].riscv64Operand}"
+        emitMultiplication(RISCV64ScratchRegister.x30, operands[1])
+        $asm.puts "mv #{operands[1].riscv64Operand}, x30"
+        riscv64EmitRegisterMask(operands[1], size)
+
+        emitBranchForTest(test, operands[2])
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
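+# Emits a branch taken on signed 32-bit overflow. Both operands are
+# sign-extended to 64 bits and the operation is performed in 64-bit
+# registers; if sign-extending the low 32 bits of the result (sext.w) does
+# not reproduce the full 64-bit value, the result overflowed the int32
+# range and the branch is taken.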
+def riscv64EmitOverflowBranchForOperation(operands, size, operation)
+    raise "Unsupported size" unless size == :w
+
+    def operationInstruction(operation)
+        case operation
+        when :add
+            "add"
+        when :sub
+            "sub"
+        when :mul
+            "mul"
+        else
+            raise "Unsupported operation"
+        end
+    end
+
+    instruction = operationInstruction(operation)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, LocalLabelReference]
+        operands = [operands[1], operands[0], operands[1], operands[2]]
+    when [Immediate, RegisterID, LocalLabelReference]
+        operands = [operands[1], operands[0], operands[1], operands[2]]
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID, LocalLabelReference]
+        $asm.puts "sext.w x30, #{operands[0].riscv64Operand}"
+        $asm.puts "sext.w x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} x30, x30, x31"
+
+        $asm.puts "mv #{operands[2].riscv64Operand}, x30"
+        riscv64EmitRegisterMask(operands[2], size)
+
+        $asm.puts "sext.w x31, x30"
+        $asm.puts "bne x30, x31, #{operands[3].asmLabel}"
+    when [RegisterID, Immediate, RegisterID, LocalLabelReference]
+        $asm.puts "sext.w x30, #{operands[0].riscv64Operand}"
+        $asm.puts "li x31, #{operands[1].riscv64Operand}"
+        $asm.puts "sext.w x31, x31"
+        $asm.puts "#{instruction} x30, x30, x31"
+
+        $asm.puts "mv #{operands[2].riscv64Operand}, x30"
+        riscv64EmitRegisterMask(operands[2], size)
+
+        $asm.puts "sext.w x31, x30"
+        $asm.puts "bne x30, x31, #{operands[3].asmLabel}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
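+# Emits a comparison that materializes 0 or 1 into the destination register.
+# Both sides are sign-extended to 64 bits for the given size, and conditions
+# without a direct set-instruction (e.g. :aeq, :lteq) are synthesized from
+# sltu/slt followed by an xori with 1 to invert the result.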
+def riscv64EmitCompare(operands, size, condition)
+    def signExtendRegisterForSize(register, target, size)
+        case size
+        when :b
+            $asm.puts "slli #{target}, #{register.riscv64Operand}, 24"
+            $asm.puts "sext.w #{target}, #{target}"
+            $asm.puts "srai #{target}, #{target}, 24"
+        when :w
+            $asm.puts "sext.w #{target}, #{register.riscv64Operand}"
+        when :d
+            $asm.puts "mv #{target}, #{register.riscv64Operand}"
+        else
+            raise "Unsupported size"
+        end
+    end
+
+    def signExtendImmediateForSize(immediate, target, size)
+        $asm.puts "li #{target}, #{immediate.riscv64Operand}"
+        case size
+        when :b
+            $asm.puts "slli #{target}, #{target}, 24"
+            $asm.puts "sext.w #{target}, #{target}"
+            $asm.puts "srai #{target}, #{target}, 24"
+        when :w
+            $asm.puts "sext.w #{target}, #{target}"
+        when :d
+        else
+            raise "Unsupported size"
+        end
+    end
+
+    def loadAndSignExtendAddressForSize(address, target, size)
+        if address.riscv64RequiresLoad
+            address.riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{riscv64LoadInstruction(size)} #{target}, 0(x31)"
+        else
+            $asm.puts "#{riscv64LoadInstruction(size)} #{target}, #{address.riscv64Operand}"
+        end
+    end
+
+    def setForCondition(lhs, rhs, target, condition)
+        case condition
+        when :eq
+            $asm.puts "sub x31, #{lhs}, #{rhs}"
+            $asm.puts "seqz #{operands[2].riscv64Operand}, x31"
+        when :neq
+            $asm.puts "sub x31, #{lhs}, #{rhs}"
+            $asm.puts "snez #{operands[2].riscv64Operand}, x31"
+        when :a
+            $asm.puts "sltu #{operands[2].riscv64Operand}, #{rhs}, #{lhs}"
+        when :aeq
+            $asm.puts "sltu #{operands[2].riscv64Operand}, #{lhs}, #{rhs}"
+            $asm.puts "xori #{operands[2].riscv64Operand}, #{operands[2].riscv64Operand}, 1"
+        when :b
+            $asm.puts "sltu #{operands[2].riscv64Operand}, #{lhs}, #{rhs}"
+        when :beq
+            $asm.puts "sltu #{operands[2].riscv64Operand}, #{rhs}, #{lhs}"
+            $asm.puts "xori #{operands[2].riscv64Operand}, #{operands[2].riscv64Operand}, 1"
+        when :lt
+            $asm.puts "slt #{target.riscv64Operand}, #{lhs}, #{rhs}"
+        when :lteq
+            $asm.puts "slt #{target.riscv64Operand}, #{rhs}, #{lhs}"
+            $asm.puts "xori #{target.riscv64Operand}, #{target.riscv64Operand}, 1"
+        when :gt
+            $asm.puts "slt #{target.riscv64Operand}, #{rhs}, #{lhs}"
+        when :gteq
+            $asm.puts "slt #{target.riscv64Operand}, #{lhs}, #{rhs}"
+            $asm.puts "xori #{target.riscv64Operand}, #{target.riscv64Operand}, 1"
+        else
+            raise "Unsupported condition"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID]
+        signExtendRegisterForSize(operands[0], 'x30', size)
+        signExtendRegisterForSize(operands[1], 'x31', size)
+        setForCondition('x30', 'x31', operands[2], condition)
+    when [RegisterID, Immediate, RegisterID]
+        signExtendRegisterForSize(operands[0], 'x30', size)
+        signExtendImmediateForSize(operands[1], 'x31', size)
+        setForCondition('x30', 'x31', operands[2], condition)
+    when [Address, RegisterID, RegisterID]
+        loadAndSignExtendAddressForSize(operands[0], 'x30', size)
+        signExtendRegisterForSize(operands[1], 'x31', size)
+        setForCondition('x30', 'x31', operands[2], condition)
+    when [Address, Immediate, RegisterID]
+        loadAndSignExtendAddressForSize(operands[0], 'x30', size)
+        signExtendImmediateForSize(operands[1], 'x31', size)
+        setForCondition('x30', 'x31', operands[2], condition)
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
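+# Emits a zero/non-zero test of the AND of the two source operands, setting
+# the destination register to 0 or 1 via seqz/snez.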
+def riscv64EmitTest(operands, size, test)
+    def testInstruction(test)
+        case test
+        when :z
+            "seqz"
+        when :nz
+            "snez"
+        else
+            raise "Unknown test type"
+        end
+    end
+
+    instruction = testInstruction(test)
+    loadInstruction = riscv64ZeroExtendedLoadInstruction(size)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID, RegisterID]
+        $asm.puts "and x31, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, x31"
+    when [RegisterID, Immediate, RegisterID]
+        if operands[1].riscv64RequiresLoad
+            $asm.puts "li x31, #{operands[1].riscv64Operand}"
+            $asm.puts "and x31, #{operands[0].riscv64Operand}, x31"
+        else
+            $asm.puts "andi x31, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, x31"
+    when [Address, Immediate, RegisterID]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x31, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x31, #{operands[0].riscv64Operand}"
+        end
+        if operands[1].riscv64RequiresLoad
+            $asm.puts "li x30, #{operands[1].riscv64Operand}"
+            $asm.puts "and x31, x30, x31"
+        else
+            $asm.puts "andi x31, x31, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, x31"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
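+# Emits an and/or/xor operation. Immediates that fit the 12-bit I-type range
+# use the immediate form (andi/ori/xori); larger ones are materialized into
+# x31 with li first.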
+def riscv64EmitLogicalOperation(operands, size, operation)
+    def opInstruction(operation)
+        case operation
+        when :and
+            "and"
+        when :or
+            "or"
+        when :xor
+            "xor"
+        else
+            raise "Unsupported logical operation"
+        end
+    end
+
+    instruction = opInstruction(operation)
+    loadInstruction = riscv64ZeroExtendedLoadInstruction(size)
+    storeInstruction = riscv64StoreInstruction(size)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID]
+        $asm.puts "#{instruction} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[1], size)
+    when [RegisterID, RegisterID, RegisterID]
+        $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[2], size)
+    when [RegisterID, Immediate, RegisterID]
+        if operands[1].riscv64RequiresLoad
+            $asm.puts "li x31, #{operands[1].riscv64Operand}"
+            $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, x31"
+        else
+            $asm.puts "#{instruction}i #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        end
+        riscv64EmitRegisterMask(operands[2], size)
+    when [Immediate, RegisterID]
+        if operands[0].riscv64RequiresLoad
+            $asm.puts "li x31, #{operands[0].riscv64Operand}"
+            $asm.puts "#{instruction} #{operands[1].riscv64Operand}, x31, #{operands[1].riscv64Operand}"
+        else
+            $asm.puts "#{instruction}i #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+        end
+        riscv64EmitRegisterMask(operands[1], size)
+    when [Immediate, Address]
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} x30, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} x30, #{operands[1].riscv64Operand}"
+        end
+        $asm.puts "li x31, #{operands[0].riscv64Operand}"
+        $asm.puts "#{instruction} x30, x31, x30"
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{storeInstruction} x30, 0(x31)"
+        else
+            $asm.puts "#{storeInstruction} x30, #{operands[1].riscv64Operand}"
+        end
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitComplementOperation(operands, size, operation)
+    def complementInstruction(size, operation)
+        case operation
+        when :not
+            "not"
+        when :neg
+            size == :w ? "negw" : "neg"
+        else
+            raise "Unsupported complement operation"
+        end
+    end
+
+    instruction = complementInstruction(size, operation)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID]
+        $asm.puts "#{instruction} #{operands[0].riscv64Operand}, #{operands[0].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[0], size)
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
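+# Emits a shift of the destination register by a register or immediate
+# amount. Immediate amounts are loaded into x31 so the register form of the
+# shift instruction can be used uniformly.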
+def riscv64EmitShift(operands, size, shift)
+    raise "Unsupported size" unless [:w, :d].include? size
+
+    def shiftInstruction(size, shift)
+        case shift
+        when :lleft
+            size == :w ? "sllw" : "sll"
+        when :lright
+            size == :w ? "srlw" : "srl"
+        when :aright
+            size == :w ? "sraw" : "sra"
+        else
+            raise "Unsupported shift type"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, RegisterID]
+        $asm.puts "#{shiftInstruction(size, shift)} #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+        riscv64EmitRegisterMask(operands[1], size)
+    when [Immediate, RegisterID]
+        $asm.puts "li x31, #{operands[0].riscv64Operand}"
+        $asm.puts "#{shiftInstruction(size, shift)} #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}, x31"
+        riscv64EmitRegisterMask(operands[1], size)
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
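+# Emits sign- or zero-extension between register sizes. Byte and halfword
+# sign-extensions are synthesized as a left shift that moves the source
+# sign bit to bit 31, a sext.w, and an arithmetic right shift back.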
+def riscv64EmitBitExtension(operands, fromSize, toSize, extensionType)
+    raise "Unsupported operand types" unless riscv64OperandTypes(operands) == [RegisterID, RegisterID]
+
+    def emitShifts(operands, shiftCount)
+        $asm.puts "slli #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}, #{shiftCount}"
+        $asm.puts "sext.w #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}"
+        $asm.puts "srai #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}, #{shiftCount}"
+    end
+
+    case [fromSize, toSize, extensionType]
+    when [:b, :w, :sign], [:b, :d, :sign]
+        emitShifts(operands, 24)
+        riscv64EmitRegisterMask(operands[1], toSize)
+    when [:h, :w, :sign], [:h, :d, :sign]
+        emitShifts(operands, 16)
+        riscv64EmitRegisterMask(operands[1], toSize)
+    when [:w, :d, :sign]
+        $asm.puts "sext.w #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    when [:w, :d, :zero]
+        $asm.puts "slli #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}, 32"
+        $asm.puts "srli #{operands[1].riscv64Operand}, #{operands[1].riscv64Operand}, 32"
+    else
+        raise "Unsupported bit-extension operation"
+    end
+end
+
+def riscv64EmitFPLoad(operands, loadInstruction)
+    case riscv64OperandTypes(operands)
+    when [Address, FPRegisterID]
+        if operands[0].riscv64RequiresLoad
+            operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{loadInstruction} #{operands[1].riscv64Operand}, 0(x31)"
+        else
+            $asm.puts "#{loadInstruction} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+        end
+    when [BaseIndex, FPRegisterID]
+        operands[0].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{loadInstruction} #{operands[1].riscv64Operand}, 0(x31)"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPStore(operands, storeInstruction)
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, Address]
+        if operands[1].riscv64RequiresLoad
+            operands[1].riscv64Load(RISCV64ScratchRegister.x31)
+            $asm.puts "#{storeInstruction} #{operands[0].riscv64Operand}, 0(x31)"
+        else
+            $asm.puts "#{storeInstruction} #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        end
+    when [FPRegisterID, BaseIndex]
+        operands[1].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+        $asm.puts "#{storeInstruction} #{operands[0].riscv64Operand}, 0(x31)"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPOperation(operands, operation)
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, FPRegisterID, FPRegisterID]
+        $asm.puts "#{operation} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+    when [FPRegisterID, FPRegisterID]
+        $asm.puts "#{operation} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPCompare(operands, precision, condition)
+    def suffixForPrecision(precision)
+        case precision
+        when :s
+            "s"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    def instructionForCondition(condition, precision)
+        suffix = suffixForPrecision(precision)
+        case condition
+        when :eq, :neq
+            "feq.#{suffix}"
+        when :lt, :gt
+            "flt.#{suffix}"
+        when :lteq, :gteq
+            "fle.#{suffix}"
+        else
+            raise "Unsupported condition"
+        end
+    end
+
+    def setForCondition(operands, precision, condition)
+        instruction = instructionForCondition(condition, precision)
+        case condition
+        when :eq
+            $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        when :neq
+            $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+            $asm.puts "xori #{operands[2].riscv64Operand}, #{operands[2].riscv64Operand}, 1"
+        when :lt, :lteq
+            $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[0].riscv64Operand}, #{operands[1].riscv64Operand}"
+        when :gt, :gteq
+            $asm.puts "#{instruction} #{operands[2].riscv64Operand}, #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+        else
+            raise "Unsupported condition"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, FPRegisterID, RegisterID]
+        setForCondition(operands, precision, condition)
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPBitwiseOperation(operands, precision, operation)
+    def suffixForPrecision(precision)
+        case precision
+        when :s
+            "w"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    suffix = suffixForPrecision(precision)
+
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, FPRegisterID]
+        $asm.puts "fmv.x.#{suffix} x30, #{operands[0].riscv64Operand}"
+        $asm.puts "fmv.x.#{suffix} x31, #{operands[1].riscv64Operand}"
+        $asm.puts "#{operation} x31, x30, x31"
+        $asm.puts "fmv.#{suffix}.x #{operands[1].riscv64Operand}, x31"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPCopy(operands, precision)
+    def suffixForPrecision(precision)
+        case precision
+        when :s
+            "w"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    suffix = suffixForPrecision(precision)
+
+    case riscv64OperandTypes(operands)
+    when [RegisterID, FPRegisterID]
+        $asm.puts "fmv.#{suffix}.x #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    when [FPRegisterID, RegisterID]
+        $asm.puts "fmv.x.#{suffix} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
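+# Emits a floating-point conditional branch. The unordered variants first
+# branch to the target if either operand is a NaN: fclass sets bit 8
+# (signaling NaN) or bit 9 (quiet NaN), so OR-ing both class results and
+# testing against the 0x300 mask detects an unordered pair.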
+def riscv64EmitFPConditionalBranchForTest(operands, precision, test)
+    def suffixForPrecision(precision)
+        case precision
+        when :s
+            "s"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    def emitBranchForUnordered(lhs, rhs, label, precision)
+        suffix = suffixForPrecision(precision)
+
+        $asm.puts "fclass.d x30, #{lhs.riscv64Operand}"
+        $asm.puts "fclass.d x31, #{rhs.riscv64Operand}"
+        $asm.puts "or x31, x30, x31"
+        $asm.puts "li x30, 0x300"
+        $asm.puts "and x31, x31, x30"
+        $asm.puts "bnez x31, #{label.asmLabel}"
+    end
+
+    def emitBranchForTest(test, lhs, rhs, branch, label, precision)
+        suffix = suffixForPrecision(precision)
+
+        $asm.puts "#{test}.#{suffix} x31, #{lhs.riscv64Operand}, #{rhs.riscv64Operand}"
+        $asm.puts "#{branch} x31, #{label.asmLabel}"
+    end
+
+    suffix = suffixForPrecision(precision)
+
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, FPRegisterID, LocalLabelReference]
+        case test
+        when :eq
+            emitBranchForTest("feq", operands[0], operands[1], "bnez", operands[2], precision)
+        when :neq
+            emitBranchForTest("feq", operands[0], operands[1], "beqz", operands[2], precision)
+        when :lt
+            emitBranchForTest("flt", operands[0], operands[1], "bnez", operands[2], precision)
+        when :lteq
+            emitBranchForTest("fle", operands[0], operands[1], "bnez", operands[2], precision)
+        when :gt
+            emitBranchForTest("flt", operands[1], operands[0], "bnez", operands[2], precision)
+        when :gteq
+            emitBranchForTest("fle", operands[1], operands[0], "bnez", operands[2], precision)
+        when :equn
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("feq", operands[0], operands[1], "bnez", operands[2], precision)
+        when :nequn
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("feq", operands[0], operands[1], "beqz", operands[2], precision)
+        when :ltun
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("flt", operands[0], operands[1], "bnez", operands[2], precision)
+        when :ltequn
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("fle", operands[0], operands[1], "bnez", operands[2], precision)
+        when :gtun
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("flt", operands[1], operands[0], "bnez", operands[2], precision)
+        when :gtequn
+            emitBranchForUnordered(operands[0], operands[1], operands[2], precision)
+            emitBranchForTest("fle", operands[1], operands[0], "bnez", operands[2], precision)
+        else
+            raise "Unsupported test"
+        end
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPRoundOperation(operands, precision, roundingMode)
+    def intSuffixForPrecision(precision)
+        case precision
+        when :s
+            "w"
+        when :d
+            "l"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    def fpSuffixForPrecision(precision)
+        case precision
+        when :s
+            "s"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    intSuffix = intSuffixForPrecision(precision)
+    fpSuffix = fpSuffixForPrecision(precision)
+
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, FPRegisterID]
+        $asm.puts "fcvt.#{intSuffix}.#{fpSuffix} x31, #{operands[0].riscv64Operand}, #{roundingMode}"
+        $asm.puts "fcvt.#{fpSuffix}.#{intSuffix} #{operands[1].riscv64Operand}, x31, #{roundingMode}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
+def riscv64EmitFPConvertOperation(operands, fromType, toType, roundingMode)
+    def intSuffixForType(type)
+        case type
+        when :w
+            "w"
+        when :wu
+            "wu"
+        when :l
+            "l"
+        when :lu
+            "lu"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    def fpSuffixForType(type)
+        case type
+        when :s
+            "s"
+        when :d
+            "d"
+        else
+            raise "Unsupported precision"
+        end
+    end
+
+    case riscv64OperandTypes(operands)
+    when [FPRegisterID, RegisterID]
+        fpSuffix = fpSuffixForType(fromType)
+        intSuffix = intSuffixForType(toType)
+
+        $asm.puts "fcvt.#{intSuffix}.#{fpSuffix} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}, #{roundingMode}"
+    when [RegisterID, FPRegisterID]
+        raise "Unsupported rounding mode" unless roundingMode == :none
+        intSuffix = intSuffixForType(fromType)
+        fpSuffix = fpSuffixForType(toType)
+
+        $asm.puts "fcvt.#{fpSuffix}.#{intSuffix} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    when [FPRegisterID, FPRegisterID]
+        raise "Unsupported rounding mode" unless roundingMode == :none
+        fpFromSuffix = fpSuffixForType(fromType)
+        fpToSuffix = fpSuffixForType(toType)
+
+        $asm.puts "fcvt.#{fpToSuffix}.#{fpFromSuffix} #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+    else
+        riscv64RaiseMismatchedOperands(operands)
+    end
+end
+
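+# Maps offlineasm register names onto physical RISC-V registers. x30 and
+# x31 are never handed out here; they are reserved as scratch registers for
+# the emission helpers above.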
+class RegisterID
+    def riscv64Operand
+        case @name
+        when 't0', 'a0', 'wa0', 'r0'
+            'x10'
+        when 't1', 'a1', 'wa1', 'r1'
+            'x11'
+        when 't2', 'a2', 'wa2'
+            'x12'
+        when 't3', 'a3', 'wa3'
+            'x13'
+        when 't4', 'a4', 'wa4'
+            'x14'
+        when 't5', 'a5', 'wa5'
+            'x15'
+        when 't6', 'a6', 'wa6'
+            'x16'
+        when 't7', 'a7', 'wa7'
+            'x17'
+        when 'ws0'
+            'x6'
+        when 'ws1'
+            'x7'
+        when 'csr0'
+            'x9'
+        when 'csr1'
+            'x18'
+        when 'csr2'
+            'x19'
+        when 'csr3'
+            'x20'
+        when 'csr4'
+            'x21'
+        when 'csr5'
+            'x22'
+        when 'csr6'
+            'x23'
+        when 'csr7'
+            'x24'
+        when 'csr8'
+            'x25'
+        when 'csr9'
+            'x26'
+        when 'csr10'
+            'x27'
+        when 'lr'
+            'ra'
+        when 'sp'
+            'sp'
+        when 'cfr'
+            'fp'
+        else
+            raise "Bad register name #{@name} at #{codeOriginString}"
+        end
+    end
+end
+
+class FPRegisterID
+    def riscv64Operand
+        case @name
+        when 'ft0'
+            'f0'
+        when 'ft1'
+            'f1'
+        when 'ft2'
+            'f2'
+        when 'ft3'
+            'f3'
+        when 'ft4'
+            'f4'
+        when 'ft5'
+            'f5'
+        when 'csfr0'
+            'f8'
+        when 'csfr1'
+            'f9'
+        when 'fa0', 'wfa0'
+            'f10'
+        when 'fa1', 'wfa1'
+            'f11'
+        when 'fa2', 'wfa2'
+            'f12'
+        when 'fa3', 'wfa3'
+            'f13'
+        when 'fa4', 'wfa4'
+            'f14'
+        when 'fa5', 'wfa5'
+            'f15'
+        when 'fa6', 'wfa6'
+            'f16'
+        when 'fa7', 'wfa7'
+            'f17'
+        when 'csfr2'
+            'f18'
+        when 'csfr3'
+            'f19'
+        when 'csfr4'
+            'f20'
+        when 'csfr5'
+            'f21'
+        when 'csfr6'
+            'f22'
+        when 'csfr7'
+            'f23'
+        when 'csfr8'
+            'f24'
+        when 'csfr9'
+            'f25'
+        when 'csfr10'
+            'f26'
+        when 'csfr11'
+            'f27'
+        else
+            raise "Bad register name #{@name} at #{codeOriginString}"
+        end
+    end
+end
+
+class RISCV64ScratchRegister
+    def initialize(name)
+        @name = name
+    end
+
+    def riscv64Operand
+        case @name
+        when :x30
+            'x30'
+        when :x31
+            'x31'
+        else
+            raise "Unsupported scratch register"
+        end
+    end
+
+    def self.x30
+        RISCV64ScratchRegister.new(:x30)
+    end
+
+    def self.x31
+        RISCV64ScratchRegister.new(:x31)
+    end
+end
+
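+# Immediates wider than the 12-bit sign-extended range of RISC-V I-type
+# instructions ([-0x800, 0x7ff]) have to be materialized into a register
+# with li before use; riscv64RequiresLoad reports that case.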
+class Immediate
+    def riscv64Operand
+        "#{value}"
+    end
+
+    def riscv64RequiresLoad
+        value > 0x7ff or value < -0x800
+    end
+
+    def riscv64Load(target)
+        $asm.puts "li #{target.riscv64Operand}, #{value}"
+    end
+end
+
+class Address
+    def riscv64Operand
+        raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value > 0x7ff or offset.value < -0x800
+        "#{offset.value}(#{base.riscv64Operand})"
+    end
+
+    def riscv64RequiresLoad
+        offset.value > 0x7ff or offset.value < -0x800
+    end
+
+    def riscv64Load(target)
+        $asm.puts "li #{target.riscv64Operand}, #{offset.value}"
+        $asm.puts "add #{target.riscv64Operand}, #{base.riscv64Operand}, #{target.riscv64Operand}"
+    end
+end
+
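+# A BaseIndex address has no direct RISC-V addressing mode, so it is always
+# computed into a scratch register: target = base + (index << scaleShift),
+# plus the offset materialized through a second scratch when non-zero.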
+class BaseIndex
+    def riscv64Load(target, scratch)
+        case riscv64OperandTypes([base, index])
+        when [RegisterID, RegisterID]
+            $asm.puts "slli #{target.riscv64Operand}, #{index.riscv64Operand}, #{scaleShift}"
+            $asm.puts "add #{target.riscv64Operand}, #{base.riscv64Operand}, #{target.riscv64Operand}"
+            if offset.value != 0
+                $asm.puts "li #{scratch.riscv64Operand}, #{offset.value}"
+                $asm.puts "add #{target.riscv64Operand}, #{target.riscv64Operand}, #{scratch.riscv64Operand}"
+            end
+        else
+            riscv64RaiseMismatchedOperands([base, index])
+        end
+    end
+end
+
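+# Lowers each offlineasm opcode to RISC-V assembly, dispatching to the
+# emission helpers above. Opcodes with no RISC-V lowering yet raise through
+# riscv64RaiseUnsupported; anything unknown falls back to lowerDefault.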
+class Instruction
+    def lowerRISCV64
+        case opcode
+        when "addi"
+            riscv64EmitAdditionOperation(operands, :w, :add)
+        when "addp", "addq"
+            riscv64EmitAdditionOperation(operands, :d, :add)
+        when "addis", "addps"
+            riscv64RaiseUnsupported
+        when "subi"
+            riscv64EmitAdditionOperation(operands, :w, :sub)
+        when "subp", "subq"
+            riscv64EmitAdditionOperation(operands, :d, :sub)
+        when "subis"
+            riscv64RaiseUnsupported
+        when "andi"
+            riscv64EmitLogicalOperation(operands, :w, :and)
+        when "andp", "andq"
+            riscv64EmitLogicalOperation(operands, :d, :and)
+        when "orh"
+            riscv64EmitLogicalOperation(operands, :h, :or)
+        when "ori"
+            riscv64EmitLogicalOperation(operands, :w, :or)
+        when "orp", "orq"
+            riscv64EmitLogicalOperation(operands, :d, :or)
+        when "xori"
+            riscv64EmitLogicalOperation(operands, :w, :xor)
+        when "xorp", "xorq"
+            riscv64EmitLogicalOperation(operands, :d, :xor)
+        when "lshifti"
+            riscv64EmitShift(operands, :w, :lleft)
+        when "lshiftp", "lshiftq"
+            riscv64EmitShift(operands, :d, :lleft)
+        when "rshifti"
+            riscv64EmitShift(operands, :w, :aright)
+        when "rshiftp", "rshiftq"
+            riscv64EmitShift(operands, :d, :aright)
+        when "urshifti"
+            riscv64EmitShift(operands, :w, :lright)
+        when "urshiftp", "urshiftq"
+            riscv64EmitShift(operands, :d, :lright)
+        when "muli"
+            riscv64EmitMulDivArithmetic(operands, :w, :mul)
+        when "mulp", "mulq"
+            riscv64EmitMulDivArithmetic(operands, :d, :mul)
+        when "divi"
+            riscv64EmitMulDivArithmetic(operands, :w, :div)
+        when "divq"
+            riscv64EmitMulDivArithmetic(operands, :d, :div)
+        when "divis", "divqs"
+            riscv64RaiseUnsupported
+        when "negi"
+            riscv64EmitComplementOperation(operands, :w, :neg)
+        when "negp", "negq"
+            riscv64EmitComplementOperation(operands, :d, :neg)
+        when "noti"
+            riscv64EmitComplementOperation(operands, :w, :not)
+        when "notq"
+            riscv64EmitComplementOperation(operands, :d, :not)
+        when "storeb"
+            riscv64EmitStore(operands, :b)
+        when "storeh"
+            riscv64EmitStore(operands, :h)
+        when "storei"
+            riscv64EmitStore(operands, :w)
+        when "storep", "storeq"
+            riscv64EmitStore(operands, :d)
+        when "loadb"
+            riscv64EmitLoad(operands, :bu, :none)
+        when "loadh"
+            riscv64EmitLoad(operands, :hu, :none)
+        when "loadi"
+            riscv64EmitLoad(operands, :wu, :none)
+        when "loadis"
+            riscv64EmitLoad(operands, :w, :none)
+        when "loadp", "loadq"
+            riscv64EmitLoad(operands, :d, :none)
+        when "loadbsi"
+            riscv64EmitLoad(operands, :b, :w)
+        when "loadbsq"
+            riscv64EmitLoad(operands, :b, :none)
+        when "loadhsi"
+            riscv64EmitLoad(operands, :h, :w)
+        when "loadhsq"
+            riscv64EmitLoad(operands, :h, :none)
+        when "bfeq"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :eq)
+        when "bflt"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :lt)
+        when "bfgt"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :gt)
+        when "bfltun"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :ltun)
+        when "bfltequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :ltequn)
+        when "bfgtun"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :gtun)
+        when "bfgtequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :s, :gtequn)
+        when "bdeq"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :eq)
+        when "bdneq"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :neq)
+        when "bdlt"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :lt)
+        when "bdlteq"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :lteq)
+        when "bdgt"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :gt)
+        when "bdgteq"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :gteq)
+        when "bdequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :equn)
+        when "bdnequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :nequn)
+        when "bdltun"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :ltun)
+        when "bdltequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :ltequn)
+        when "bdgtun"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :gtun)
+        when "bdgtequn"
+            riscv64EmitFPConditionalBranchForTest(operands, :d, :gtequn)
+        when "td2i", "bcd2i", "btd2i"
+            riscv64RaiseUnsupported
+        when "movdz"
+            riscv64RaiseUnsupported
+        when "pop"
+            size = 8 * operands.size
+            operands.each_with_index {
+                | op, index |
+                $asm.puts "ld #{op.riscv64Operand}, #{size - 8 * (index + 1)}(sp)"
+            }
+            $asm.puts "addi sp, sp, #{size}"
+        when "push"
+            size = 8 * operands.size
+            $asm.puts "addi sp, sp, #{-size}"
+            operands.reverse.each_with_index {
+                | op, index |
+                $asm.puts "sd #{op.riscv64Operand}, #{size - 8 * (index + 1)}(sp)"
+            }
+        when "move"
+            case riscv64OperandTypes(operands)
+            when [RegisterID, RegisterID]
+                $asm.puts "mv #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+            when [Immediate, RegisterID]
+                $asm.puts "li #{operands[1].riscv64Operand}, #{operands[0].riscv64Operand}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "sxb2i"
+            riscv64EmitBitExtension(operands, :b, :w, :sign)
+        when "sxb2q"
+            riscv64EmitBitExtension(operands, :b, :d, :sign)
+        when "sxh2i"
+            riscv64EmitBitExtension(operands, :h, :w, :sign)
+        when "sxh2q"
+            riscv64EmitBitExtension(operands, :h, :d, :sign)
+        when "sxi2p", "sxi2q"
+            riscv64EmitBitExtension(operands, :w, :d, :sign)
+        when "zxi2p", "zxi2q"
+            riscv64EmitBitExtension(operands, :w, :d, :zero)
+        when "nop"
+            $asm.puts "nop"
+        when "bbeq"
+            riscv64EmitConditionalBranch(operands, :b, :eq)
+        when "bieq"
+            riscv64EmitConditionalBranch(operands, :w, :eq)
+        when "bpeq", "bqeq"
+            riscv64EmitConditionalBranch(operands, :d, :eq)
+        when "bbneq"
+            riscv64EmitConditionalBranch(operands, :b, :neq)
+        when "bineq"
+            riscv64EmitConditionalBranch(operands, :w, :neq)
+        when "bpneq", "bqneq"
+            riscv64EmitConditionalBranch(operands, :d, :neq)
+        when "bba"
+            riscv64EmitConditionalBranch(operands, :b, :a)
+        when "bia"
+            riscv64EmitConditionalBranch(operands, :w, :a)
+        when "bpa", "bqa"
+            riscv64EmitConditionalBranch(operands, :d, :a)
+        when "bbaeq"
+            riscv64EmitConditionalBranch(operands, :b, :aeq)
+        when "biaeq"
+            riscv64EmitConditionalBranch(operands, :w, :aeq)
+        when "bpaeq", "bqaeq"
+            riscv64EmitConditionalBranch(operands, :d, :aeq)
+        when "bbb"
+            riscv64EmitConditionalBranch(operands, :b, :b)
+        when "bib"
+            riscv64EmitConditionalBranch(operands, :w, :b)
+        when "bpb", "bqb"
+            riscv64EmitConditionalBranch(operands, :d, :b)
+        when "bbbeq"
+            riscv64EmitConditionalBranch(operands, :b, :beq)
+        when "bibeq"
+            riscv64EmitConditionalBranch(operands, :w, :beq)
+        when "bpbeq", "bqbeq"
+            riscv64EmitConditionalBranch(operands, :d, :beq)
+        when "bbgt"
+            riscv64EmitConditionalBranch(operands, :b, :gt)
+        when "bigt"
+            riscv64EmitConditionalBranch(operands, :w, :gt)
+        when "bpgt", "bqgt"
+            riscv64EmitConditionalBranch(operands, :d, :gt)
+        when "bbgteq"
+            riscv64EmitConditionalBranch(operands, :b, :gteq)
+        when "bigteq"
+            riscv64EmitConditionalBranch(operands, :w, :gteq)
+        when "bpgteq", "bqgteq"
+            riscv64EmitConditionalBranch(operands, :d, :gteq)
+        when "bblt"
+            riscv64EmitConditionalBranch(operands, :b, :lt)
+        when "bilt"
+            riscv64EmitConditionalBranch(operands, :w, :lt)
+        when "bplt", "bqlt"
+            riscv64EmitConditionalBranch(operands, :d, :lt)
+        when "bblteq"
+            riscv64EmitConditionalBranch(operands, :b, :lteq)
+        when "bilteq"
+            riscv64EmitConditionalBranch(operands, :w, :lteq)
+        when "bplteq", "bqlteq"
+            riscv64EmitConditionalBranch(operands, :d, :lteq)
+        when "btbz"
+            riscv64EmitConditionalBranchForTest(operands, :b, :z)
+        when "btbnz"
+            riscv64EmitConditionalBranchForTest(operands, :b, :nz)
+        when "btbs"
+            riscv64EmitConditionalBranchForTest(operands, :b, :s)
+        when "btiz"
+            riscv64EmitConditionalBranchForTest(operands, :w, :z)
+        when "btinz"
+            riscv64EmitConditionalBranchForTest(operands, :w, :nz)
+        when "btis"
+            riscv64EmitConditionalBranchForTest(operands, :w, :s)
+        when "btpz", "btqz"
+            riscv64EmitConditionalBranchForTest(operands, :d, :z)
+        when "btpnz", "btqnz"
+            riscv64EmitConditionalBranchForTest(operands, :d, :nz)
+        when "btps", "btqs"
+            riscv64EmitConditionalBranchForTest(operands, :d, :s)
+        when "baddiz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :add, :z)
+        when "baddinz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :add, :nz)
+        when "baddis"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :add, :s)
+        when "baddio"
+            riscv64EmitOverflowBranchForOperation(operands, :w, :add)
+        when "baddpz", "baddqz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :d, :add, :z)
+        when "baddpnz", "baddqnz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :d, :add, :nz)
+        when "baddps", "baddqs"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :d, :add, :s)
+        when "baddpo", "baddqo"
+            riscv64RaiseUnsupported
+        when "bsubiz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :sub, :z)
+        when "bsubinz"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :sub, :nz)
+        when "bsubis"
+            riscv64EmitConditionalBranchForAdditionOperation(operands, :w, :sub, :s)
+        when "bsubio"
+            riscv64EmitOverflowBranchForOperation(operands, :w, :sub)
+        when "bmuliz"
+            riscv64EmitConditionalBranchForMultiplicationOperation(operands, :w, :z)
+        when "bmulinz"
+            riscv64EmitConditionalBranchForMultiplicationOperation(operands, :w, :nz)
+        when "bmulis"
+            riscv64EmitConditionalBranchForMultiplicationOperation(operands, :w, :s)
+        when "bmulio"
+            riscv64EmitOverflowBranchForOperation(operands, :w, :mul)
+        when "boriz", "borinz", "boris", "borio"
+            riscv64RaiseUnsupported
+        when "jmp"
+            case riscv64OperandTypes(operands)
+            when [RegisterID, Immediate]
+                $asm.puts "jr #{operands[0].riscv64Operand}"
+            when [BaseIndex, Immediate, Immediate]
+                operands[0].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+                $asm.puts "ld x31, 0(x31)"
+                $asm.puts "jr x31"
+            when [Address, Immediate]
+                if operands[0].riscv64RequiresLoad
+                    operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+                    $asm.puts "ld x31, 0(x31)"
+                    $asm.puts "jr x31"
+                else
+                    $asm.puts "ld x31, #{operands[0].riscv64Operand}"
+                    $asm.puts "jr x31"
+                end
+            when [LabelReference]
+                $asm.puts "tail #{operands[0].asmLabel}"
+            when [LocalLabelReference]
+                $asm.puts "tail #{operands[0].asmLabel}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "call"
+            case riscv64OperandTypes(operands)
+            when [RegisterID, Immediate]
+                $asm.puts "jalr #{operands[0].riscv64Operand}"
+            when [Address, Immediate]
+                if operands[0].riscv64RequiresLoad
+                    operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+                    $asm.puts "ld x31, 0(x31)"
+                    $asm.puts "jalr x31"
+                else
+                    $asm.puts "ld x31, #{operands[0].riscv64Operand}"
+                    $asm.puts "jalr x31"
+                end
+            when [LabelReference]
+                $asm.puts "call #{operands[0].asmLabel}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "break"
+            $asm.puts "ebreak"
+        when "ret"
+            $asm.puts "ret"
+        when "cbeq"
+            riscv64EmitCompare(operands, :b, :eq)
+        when "cieq"
+            riscv64EmitCompare(operands, :w, :eq)
+        when "cpeq", "cqeq"
+            riscv64EmitCompare(operands, :d, :eq)
+        when "cbneq"
+            riscv64EmitCompare(operands, :b, :neq)
+        when "cineq"
+            riscv64EmitCompare(operands, :w, :neq)
+        when "cpneq", "cqneq"
+            riscv64EmitCompare(operands, :d, :neq)
+        when "cba"
+            riscv64EmitCompare(operands, :b, :a)
+        when "cia"
+            riscv64EmitCompare(operands, :w, :a)
+        when "cpa", "cqa"
+            riscv64EmitCompare(operands, :d, :a)
+        when "cbaeq"
+            riscv64EmitCompare(operands, :b, :aeq)
+        when "ciaeq"
+            riscv64EmitCompare(operands, :w, :aeq)
+        when "cpaeq", "cqaeq"
+            riscv64EmitCompare(operands, :d, :aeq)
+        when "cbb"
+            riscv64EmitCompare(operands, :b, :b)
+        when "cib"
+            riscv64EmitCompare(operands, :w, :b)
+        when "cpb", "cqb"
+            riscv64EmitCompare(operands, :d, :b)
+        when "cbbeq"
+            riscv64EmitCompare(operands, :b, :beq)
+        when "cibeq"
+            riscv64EmitCompare(operands, :w, :beq)
+        when "cpbeq", "cqbeq"
+            riscv64EmitCompare(operands, :d, :beq)
+        when "cblt"
+            riscv64EmitCompare(operands, :b, :lt)
+        when "cilt"
+            riscv64EmitCompare(operands, :w, :lt)
+        when "cplt", "cqlt"
+            riscv64EmitCompare(operands, :d, :lt)
+        when "cblteq"
+            riscv64EmitCompare(operands, :b, :lteq)
+        when "cilteq"
+            riscv64EmitCompare(operands, :w, :lteq)
+        when "cplteq", "cqlteq"
+            riscv64EmitCompare(operands, :d, :lteq)
+        when "cbgt"
+            riscv64EmitCompare(operands, :b, :gt)
+        when "cigt"
+            riscv64EmitCompare(operands, :w, :gt)
+        when "cpgt", "cqgt"
+            riscv64EmitCompare(operands, :d, :gt)
+        when "cbgteq"
+            riscv64EmitCompare(operands, :b, :gteq)
+        when "cigteq"
+            riscv64EmitCompare(operands, :w, :gteq)
+        when "cpgteq", "cqgteq"
+            riscv64EmitCompare(operands, :d, :gteq)
+        when "tbz"
+            riscv64EmitTest(operands, :b, :z)
+        when "tbnz"
+            riscv64EmitTest(operands, :b, :nz)
+        when "tiz"
+            riscv64EmitTest(operands, :w, :z)
+        when "tinz"
+            riscv64EmitTest(operands, :w, :nz)
+        when "tpz", "tqz"
+            riscv64EmitTest(operands, :d, :z)
+        when "tpnz", "tqnz"
+            riscv64EmitTest(operands, :d, :nz)
+        when "tbs", "tis", "tps", "tqs"
+            riscv64RaiseUnsupported
+        when "peek", "poke"
+            riscv64RaiseUnsupported
+        when "bo", "bs", "bz", "bnz"
+            riscv64RaiseUnsupported
+        when "leap", "leaq"
+            case riscv64OperandTypes(operands)
+            when [Address, RegisterID]
+                if operands[0].riscv64RequiresLoad
+                    operands[0].riscv64Load(RISCV64ScratchRegister.x31)
+                    $asm.puts "mv #{operands[1].riscv64Operand}, x31"
+                else
+                    $asm.puts "addi #{operands[1].riscv64Operand}, #{operands[0].base.riscv64Operand}, #{operands[0].offset.value}"
+                end
+            when [BaseIndex, RegisterID]
+                operands[0].riscv64Load(RISCV64ScratchRegister.x31, RISCV64ScratchRegister.x30)
+                $asm.puts "mv #{operands[1].riscv64Operand}, x31"
+            when [LabelReference, RegisterID]
+                $asm.puts "lla #{operands[1].riscv64Operand}, #{operands[0].asmLabel}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "smulli"
+            riscv64RaiseUnsupported
+        when "memfence"
+            $asm.puts "fence rw, rw"
+        when "fence"
+            $asm.puts "fence"
+        when "bfiq"
+            riscv64RaiseUnsupported
+        when "pcrtoaddr"
+            case riscv64OperandTypes(operands)
+            when [LabelReference, RegisterID]
+                $asm.puts "lla #{operands[1].riscv64Operand}, #{operands[0].asmLabel}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "globaladdr"
+            case riscv64OperandTypes(operands)
+            when [LabelReference, RegisterID]
+                $asm.puts "la #{operands[1].riscv64Operand}, #{operands[0].asmLabel}"
+            else
+                riscv64RaiseMismatchedOperands(operands)
+            end
+        when "lrotatei", "lrotateq"
+            riscv64RaiseUnsupported
+        when "rrotatei", "rrotateq"
+            riscv64RaiseUnsupported
+        when "moved"
+            riscv64EmitFPOperation(operands, "fmv.d")
+        when "loadf"
+            riscv64EmitFPLoad(operands, "flw")
+        when "loadd"
+            riscv64EmitFPLoad(operands, "fld")
+        when "storef"
+            riscv64EmitFPStore(operands, "fsw")
+        when "stored"
+            riscv64EmitFPStore(operands, "fsd")
+        when "addf"
+            riscv64EmitFPOperation([operands[0], operands[1], operands[1]], "fadd.s")
+        when "addd"
+            riscv64EmitFPOperation([operands[0], operands[1], operands[1]], "fadd.d")
+        when "subf"
+            riscv64EmitFPOperation([operands[1], operands[0], operands[1]], "fsub.s")
+        when "subd"
+            riscv64EmitFPOperation([operands[1], operands[0], operands[1]], "fsub.d")
+        when "mulf"
+            riscv64EmitFPOperation([operands[0], operands[1], operands[1]], "fmul.s")
+        when "muld"
+            riscv64EmitFPOperation([operands[0], operands[1], operands[1]], "fmul.d")
+        when "divf"
+            riscv64EmitFPOperation([operands[1], operands[0], operands[1]], "fdiv.s")
+        when "divd"
+            riscv64EmitFPOperation([operands[1], operands[0], operands[1]], "fdiv.d")
+        when "sqrtf"
+            riscv64EmitFPOperation(operands, "fsqrt.s")
+        when "sqrtd"
+            riscv64EmitFPOperation(operands, "fsqrt.d")
+        when "absf"
+            riscv64EmitFPOperation(operands, "fabs.s")
+        when "absd"
+            riscv64EmitFPOperation(operands, "fabs.d")
+        when "negf"
+            riscv64EmitFPOperation(operands, "fneg.s")
+        when "negd"
+            riscv64EmitFPOperation(operands, "fneg.d")
+        when "floorf"
+            riscv64EmitFPRoundOperation(operands, :s, "rdn")
+        when "floord"
+            riscv64EmitFPRoundOperation(operands, :d, "rdn")
+        when "ceilf"
+            riscv64EmitFPRoundOperation(operands, :s, "rup")
+        when "ceild"
+            riscv64EmitFPRoundOperation(operands, :d, "rup")
+        when "roundf"
+            riscv64EmitFPRoundOperation(operands, :s, "rne")
+        when "roundd"
+            riscv64EmitFPRoundOperation(operands, :d, "rne")
+        when "truncatef"
+            riscv64EmitFPRoundOperation(operands, :s, "rtz")
+        when "truncated"
+            riscv64EmitFPRoundOperation(operands, :d, "rtz")
+        when "truncatef2i"
+            riscv64EmitFPConvertOperation(operands, :s, :wu, "rtz")
+        when "truncated2i"
+            riscv64EmitFPConvertOperation(operands, :d, :wu, "rtz")
+        when "truncatef2q"
+            riscv64EmitFPConvertOperation(operands, :s, :lu, "rtz")
+        when "truncated2q"
+            riscv64EmitFPConvertOperation(operands, :d, :lu, "rtz")
+        when "truncatef2is"
+            riscv64EmitFPConvertOperation(operands, :s, :w, "rtz")
+        when "truncated2is"
+            riscv64EmitFPConvertOperation(operands, :d, :w, "rtz")
+        when "truncatef2qs"
+            riscv64EmitFPConvertOperation(operands, :s, :l, "rtz")
+        when "truncated2qs"
+            riscv64EmitFPConvertOperation(operands, :d, :l, "rtz")
+        when "ci2f"
+            riscv64EmitFPConvertOperation(operands, :wu, :s, :none)
+        when "ci2d"
+            riscv64EmitFPConvertOperation(operands, :wu, :d, :none)
+        when "ci2fs"
+            riscv64EmitFPConvertOperation(operands, :w, :s, :none)
+        when "ci2ds"
+            riscv64EmitFPConvertOperation(operands, :w, :d, :none)
+        when "cq2f"
+            riscv64EmitFPConvertOperation(operands, :lu, :s, :none)
+        when "cq2d"
+            riscv64EmitFPConvertOperation(operands, :lu, :d, :none)
+        when "cq2fs"
+            riscv64EmitFPConvertOperation(operands, :l, :s, :none)
+        when "cq2ds"
+            riscv64EmitFPConvertOperation(operands, :l, :d, :none)
+        when "cf2d"
+            riscv64EmitFPConvertOperation(operands, :s, :d, :none)
+        when "cd2f"
+            riscv64EmitFPConvertOperation(operands, :d, :s, :none)
+        when "tzcnti", "tzcntq"
+            riscv64RaiseUnsupported
+        when "lzcnti", "lzcntq"
+            riscv64RaiseUnsupported
+        when "andf"
+            riscv64EmitFPBitwiseOperation(operands, :s, "and")
+        when "andd"
+            riscv64EmitFPBitwiseOperation(operands, :d, "and")
+        when "orf"
+            riscv64EmitFPBitwiseOperation(operands, :s, "or")
+        when "ord"
+            riscv64EmitFPBitwiseOperation(operands, :d, "or")
+        when "cfeq"
+            riscv64EmitFPCompare(operands, :s, :eq)
+        when "cfneq"
+            riscv64EmitFPCompare(operands, :s, :neq)
+        when "cflt"
+            riscv64EmitFPCompare(operands, :s, :lt)
+        when "cflteq"
+            riscv64EmitFPCompare(operands, :s, :lteq)
+        when "cfgt"
+            riscv64EmitFPCompare(operands, :s, :gt)
+        when "cfgteq"
+            riscv64EmitFPCompare(operands, :s, :gteq)
+        when "cfnequn"
+            riscv64EmitFPCompare(operands, :s, :nequn)
+        when "cdeq"
+            riscv64EmitFPCompare(operands, :d, :eq)
+        when "cdneq"
+            riscv64EmitFPCompare(operands, :d, :neq)
+        when "cdlt"
+            riscv64EmitFPCompare(operands, :d, :lt)
+        when "cdlteq"
+            riscv64EmitFPCompare(operands, :d, :lteq)
+        when "cdgt"
+            riscv64EmitFPCompare(operands, :d, :gt)
+        when "cdgteq"
+            riscv64EmitFPCompare(operands, :d, :gteq)
+        when "cdnequn"
+            riscv64EmitFPCompare(operands, :d, :nequn)
+        when "fi2f"
+            riscv64EmitFPCopy(operands, :s)
+        when "ff2i"
+            riscv64EmitFPCopy(operands, :s)
+        when "fp2d", "fq2d"
+            riscv64EmitFPCopy(operands, :d)
+        when "fd2p", "fd2q"
+            riscv64EmitFPCopy(operands, :d)
+        when "tls_loadp", "tls_storep"
+            riscv64RaiseUnsupported
+        when "loadlinkacqb", "loadlinkacqh", "loadlinkacqi", "loadlinkacqq"
+            riscv64RaiseUnsupported
+        when "storecondrelb", "storecondrelh", "storecondreli", "storecondrelq"
+            riscv64RaiseUnsupported
+        when "atomicxchgaddb", "atomicxchgaddh", "atomicxchgaddi", "atomicxchgaddq"
+            riscv64RaiseUnsupported
+        when "atomicxchgclearb", "atomicxchgclearh", "atomicxchgcleari", "atomicxchgclearq"
+            riscv64RaiseUnsupported
+        when "atomicxchgorb", "atomicxchgorh", "atomicxchgori", "atomicxchgorq"
+            riscv64RaiseUnsupported
+        when "atomicxchgxorb", "atomicxchgxorh", "atomicxchgxori", "atomicxchgxorq"
+            riscv64RaiseUnsupported
+        when "atomicxchgb", "atomicxchgh", "atomicxchgi", "atomicxchgq"
+            riscv64RaiseUnsupported
+        when "atomicweakcasb", "atomicweakcash", "atomicweakcasi", "atomicweakcasq"
+            riscv64RaiseUnsupported
+        when "atomicloadb", "atomicloadh", "atomicloadi", "atomicloadq"
+            riscv64RaiseUnsupported
+        else
+            lowerDefault
+        end
+    end
+end
</ins></span></pre></div>
<a id="trunkSourceWTFChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/ChangeLog (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/ChangeLog       2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/WTF/ChangeLog  2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -1,3 +1,14 @@
</span><ins>+2021-08-30  Zan Dobersek  <zdobersek@igalia.com>
+
+        RISCV64 support in LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=229035
+        <rdar://problem/82120908>
+
+        Reviewed by Yusuke Suzuki.
+
+        * wtf/PlatformEnable.h:
+        Define ENABLE_LLINT_EMBEDDED_OPCODE_ID to 1 for CPU(RISCV64).
+
</ins><span class="cx"> 2021-08-28  Cameron McCormack  <heycam@apple.com>
</span><span class="cx"> 
</span><span class="cx">         Miscellaneous typo fixes
</span></span></pre></div>
<a id="trunkSourceWTFwtfPlatformEnableh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/wtf/PlatformEnable.h (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/wtf/PlatformEnable.h    2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/WTF/wtf/PlatformEnable.h       2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -884,7 +884,7 @@
</span><span class="cx">    that executes each opcode. It cannot be supported by the CLoop since there's no way to embed the
</span><span class="cx">    OpcodeID word in the CLoop's switch statement cases. It is also currently not implemented for MSVC.
</span><span class="cx"> */
</span><del>-#if !defined(ENABLE_LLINT_EMBEDDED_OPCODE_ID) && !ENABLE(C_LOOP) && !COMPILER(MSVC) && (CPU(X86) || CPU(X86_64) || CPU(ARM64) || (CPU(ARM_THUMB2) && OS(DARWIN)))
</del><ins>+#if !defined(ENABLE_LLINT_EMBEDDED_OPCODE_ID) && !ENABLE(C_LOOP) && !COMPILER(MSVC) && (CPU(X86) || CPU(X86_64) || CPU(ARM64) || (CPU(ARM_THUMB2) && OS(DARWIN)) || CPU(RISCV64))
</ins><span class="cx"> #define ENABLE_LLINT_EMBEDDED_OPCODE_ID 1
</span><span class="cx"> #endif
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourcecmakeWebKitFeaturescmake"></a>
<div class="modfile"><h4>Modified: trunk/Source/cmake/WebKitFeatures.cmake (281756 => 281757)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/cmake/WebKitFeatures.cmake  2021-08-30 14:59:41 UTC (rev 281756)
+++ trunk/Source/cmake/WebKitFeatures.cmake     2021-08-30 15:11:39 UTC (rev 281757)
</span><span class="lines">@@ -84,7 +84,7 @@
</span><span class="cx">         set(ENABLE_JIT_DEFAULT OFF)
</span><span class="cx">         set(ENABLE_FTL_DEFAULT OFF)
</span><span class="cx">         set(USE_SYSTEM_MALLOC_DEFAULT OFF)
</span><del>-        set(ENABLE_C_LOOP_DEFAULT ON)
</del><ins>+        set(ENABLE_C_LOOP_DEFAULT OFF)
</ins><span class="cx">         set(ENABLE_SAMPLING_PROFILER_DEFAULT OFF)
</span><span class="cx">     else ()
</span><span class="cx">         set(ENABLE_JIT_DEFAULT OFF)
</span></span></pre>
</div>
</div>

</body>
</html>