Multi-Value All The Wasm!
Note: I am cross-posting this to my personal blog from the Bytecode Alliance blog and the Mozilla Hacks blog.
Multi-value is a proposed extension to core WebAssembly that enables functions to return multiple values, among other things. It is also a prerequisite for Wasm interface types.
I’ve been adding multi-value support all over the place recently:
- I added multi-value support to all the various crates in the Rust and WebAssembly toolchain, so that Rust projects can compile down to Wasm code that uses multi-value features.
- I added multi-value support to Wasmtime, the WebAssembly runtime built on top of the Cranelift code generator, so that it can run Wasm code that uses multi-value features.
Now, as my multi-value efforts are wrapping up, it seems like a good time to reflect on the experience and write up everything that’s been required to get all this support in all these places.
Wait — What is Multi-Value Wasm?
In core WebAssembly, there are a couple of arity restrictions on the language:
- functions can only return either zero or one value, and
- instruction sequences like `block`s, `if`s, and `loop`s cannot consume any stack values, and may only produce zero or one resulting stack value.
The multi-value proposal is an extension to the WebAssembly standard that lifts these arity restrictions. Under the new multi-value Wasm rules:
- functions can return an arbitrary number of values, and
- instruction sequences can consume and produce an arbitrary number of stack values.
The following snippets are only valid under the new rules introduced in the multi-value Wasm proposal:
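For example, here is a sketch of a function with two results and of a block with parameters (illustrative code with made-up names, standing in for the exact snippets):

```wat
(module
  ;; A function with two results -- only valid with multi-value.
  (func $divmod (param $n i32) (param $d i32) (result i32 i32)
    (i32.div_u (local.get $n) (local.get $d))
    (i32.rem_u (local.get $n) (local.get $d)))

  ;; A block that consumes two operands from the stack and produces one
  ;; result -- block parameters are also only valid with multi-value.
  (func $add_via_block (param i32 i32) (result i32)
    local.get 0
    local.get 1
    (block (param i32 i32) (result i32)
      i32.add)))
```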
The multi-value proposal is currently at phase 3 of the WebAssembly standardization process.
But Why Should I Care?
Code Size
There are a few scenarios where compilers are forced to jump through hoops when producing multiple stack values for core Wasm. Workarounds include introducing temporary local variables and using `local.get` and `local.set` instructions, because the arity restrictions on blocks mean that the values cannot be left on the stack.
Consider a scenario where we are computing two stack values: the pointer to a string in linear memory, and its length. Furthermore, imagine we are choosing between two different strings (which therefore have different pointer-and-length pairs) based on some condition. But whichever string we choose, we’re going to process the string in the same fashion, so we just want to push the pointer-and-length pair for our chosen string onto the stack, and control flow can join afterwards.
With multi-value, we can do this in a straightforward fashion:
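Here is a sketch of the shape of that code; the helpers `$should_use_first_string` and `$process_string` and the pointer/length constants are made up, and the byte counts in the surrounding text refer to the exact snippets these sketches stand in for:

```wat
call $should_use_first_string
if (result i32 i32)   ;; the `if` itself leaves two values on the stack
  i32.const 100       ;; pointer to the first string
  i32.const 6         ;; its length
else
  i32.const 1024      ;; pointer to the second string
  i32.const 16        ;; its length
end
call $process_string  ;; control flow joins: process whichever pair we chose
```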
This encoding is also compact: only sixteen bytes!
When we’re targeting core Wasm, and multi-value isn’t available, we’re forced to pursue alternative, more convoluted forms. We can smuggle the stack values out of each `if` and `else` arm via temporary local values:
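A sketch of the same computation using temporary locals (same made-up helpers and constants as above):

```wat
;; Declared at the top of the enclosing function:
(local $ptr i32)
(local $len i32)

call $should_use_first_string
if                    ;; core Wasm: the `if` can't leave results behind...
  i32.const 100
  local.set $ptr      ;; ...so smuggle the values out through locals
  i32.const 6
  local.set $len
else
  i32.const 1024
  local.set $ptr
  i32.const 16
  local.set $len
end
local.get $ptr        ;; and push them back onto the stack afterwards
local.get $len
call $process_string
```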
This encoding requires 30 bytes, an overhead of fourteen bytes over the ideal multi-value version. And if we were computing three values instead of two, there would be even more overhead, and the same is true for four values, etc… The additional overhead is proportional to how many values we’re producing in the `if` and `else` arms.
We can actually go a little smaller than that — still with core Wasm — by jumping through a different hoop. We can split this into two `if ... else ... end` blocks and duplicate the condition check to avoid introducing temporaries for each of the computed values themselves:
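A sketch of the duplicated-condition version (again with the same made-up helpers, and assuming that re-evaluating the condition is cheap and side-effect free):

```wat
call $should_use_first_string
if (result i32)       ;; the first `if` computes just the pointer
  i32.const 100
else
  i32.const 1024
end
call $should_use_first_string
if (result i32)       ;; the second `if` re-checks and computes just the length
  i32.const 6
else
  i32.const 16
end
call $process_string
```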
This gets us down to 28 bytes. Two fewer than the last version, but still an overhead of twelve bytes compared to the multi-value encoding. And the overhead is still proportional to how many values we’re computing.
There’s no way around it: we need multi-value to get the most compact code here.
New Instructions
The multi-value proposal opens up the possibility for new instructions that produce multiple values:
- An `i32.divmod` instruction of type `[i32 i32] -> [i32 i32]` that takes a numerator and divisor and produces both their quotient and remainder.
- Arithmetic operations with an additional carry result. These could be used to better implement big ints, overflow checks, and saturating arithmetic.
Returning Small Structs More Efficiently
Returning multiple values from functions will allow us to more efficiently return small structures like Rust’s `Result`s. Without multi-value returns, these relatively small structs that don’t fit in a single Wasm value type get placed in linear memory temporarily. With multi-value returns, the values don’t escape to linear memory, and instead stay on the stack. This can be more efficient, since Wasm stack values are generally more amenable to optimization than loads and stores from linear memory.
Interface Types
Shrinking code size is great, and new instructions would be fancy, but here’s what I’m really excited about: WebAssembly interface types. Interface types used to be called “host bindings,” and they are the key to unlocking:
- direct, optimized access to the browser’s DOM methods on the Web,
- “shared-nothing linking” of WebAssembly modules, and
- defining language-neutral interfaces, like WASI.
For all three use cases, we might want to return a string from a callee Wasm module. The caller that is consuming this string might be a Web browser, or it might be another Wasm module, or it might be a WASI-compatible Wasm runtime. In any case, a natural way to return the string is as two `i32`s:
- a pointer to the start of the string in linear memory, and
- the byte length of the string.
The interface adapter can then lift that pair of `i32`s into an abstract string type, and then lower it into the caller’s concrete string representation on the other side. Interface types are designed such that in most cases, this lifting and lowering can be optimized into a quick memory copy from the callee’s linear memory to the caller’s.
But before the interface adapters can do that lifting and lowering, they need access to the pointer and length pair, which means the callee Wasm function needs to return two values, which means we need multi-value Wasm for interface types.
All The Implementing!
Now that we know what multi-value Wasm is, and why it’s exciting, I’ll recount the tale of implementing support for it all over the place. I started with implementing multi-value support in the Rust and WebAssembly toolchain, and then I added support to the Wasmtime runtime, and the Cranelift code generator it’s built on top of.
Rust and WebAssembly Toolchain
What falls under the Rust and Wasm toolchain umbrella? It is a superset of the general Rust toolchain:
- `cargo`: Manages builds and dependencies.
- `rustc`: Compiles Rust sources into code.
- LLVM: Used by `rustc` under the covers to optimize and generate code.
And then additionally, when targeting Wasm, we also use a few more moving parts:
- `wasm-bindgen`: Part library and part Wasm post-processor, `wasm-bindgen` generates bindings for consuming and producing interfaces defined with interface types (and much more!).
- `walrus`: A library for transforming and rewriting WebAssembly modules, used by `wasm-bindgen`’s post-processor.
- `wasmparser`: An event-style parser for WebAssembly binaries, used by `walrus`.
To summarize the toolchain’s pipeline: `cargo` and `rustc` (with LLVM underneath) compile Rust sources into an initial `.wasm` binary, and then `wasm-bindgen`’s post-processor (built on `walrus` and `wasmparser`) rewrites that binary and emits the final Wasm along with its generated bindings.
My goal is to unlock interface types with multi-value functions. For now, I haven’t been focusing on code size wins from generating multi-value blocks. For my purposes, I only need to introduce multi-value functions at the edges of the Wasm module that talk to interface adapters; I don’t need to make all function bodies use the optimal multi-value instruction sequences. Therefore, I decided to have `wasm-bindgen`’s post-processor rewrite certain functions to use multi-value returns, rather than add support in LLVM.0 With this approach, `cargo`, `rustc`, and LLVM don’t need any changes, and I only needed to add support to the following tools:
- `wasm-bindgen`
- `walrus`
- `wasmparser`
wasmparser
`wasmparser` is an event-style parser for WebAssembly binaries. It may seem strange that adding toolchain support for generating multi-value Wasm began with parsing multi-value Wasm. But it is necessary to make testing easy and painless, and we needed it eventually for Wasmtime anyways, which also uses `wasmparser`.
In core Wasm, the optional value type result of a `block`, `loop`, or `if` is encoded directly in the instruction:
- a `0x40` byte means there is no result
- a `0x7f` byte means there is a single `i32` result
- a `0x7e` byte means there is a single `i64` result
- etc…
With multi-value Wasm, there are not only zero or one resulting value types, there are also parameter types. Blocks can have the same set of types that functions can have. Functions already de-duplicate their types in the “Type” section of a Wasm binary and reference them via index. With multi-value, blocks do that now as well. But how does this co-exist with non-multi-value block types?
The index is encoded as a signed variable-length integer, using the LEB128 encoding. If we interpret non-multi-value blocks’ optional result value type as a signed LEB128, we get:
- `-64` (the smallest number that can be encoded as a single byte with signed LEB128) means there is no result
- `-1` means there is a single `i32` result
- `-2` means there is a single `i64` result
- etc…
They’re all negative, leaving the positive numbers to be interpreted as indices into the “Type” section for multi-value blocks! A nice little encoding trick and bit of foresight from the WebAssembly standards folks.
Adding support for parsing these was straightforward, but `wasmparser` also supports validating the Wasm as it parses it. Adding validation support was a little bit more involved.
`wasmparser`’s validation implementation is similar to the validation algorithm presented in the appendix of the WebAssembly spec: it abstractly interprets the Wasm instructions, maintaining a stack of types, rather than a stack of values. If any operation uses operands of the wrong type — for example, the stack has an `f32` at its top when we are executing an `i32.add` instruction, and therefore expect two `i32`s on top of the stack — then validation fails. If there are no type errors, then it succeeds. There are some complications when dealing with stack-polymorphic instructions, like `drop`, but they don’t really interact with multi-value.
Whenever `wasmparser` encounters a `block`, `loop`, or `if` instruction, it pushes an associated control frame that keeps track of how deep into the stack instructions within this block can access. Before multi-value, the limit was always the length of the stack upon entering the block, because blocks didn’t take any values from the stack. With multi-value, this limit becomes `stack.len() - block.num_params()`. When exiting a block, `wasmparser` pops the associated control frame. It checks that the top `n` types on the stack match the block’s result types, and that the stack’s length is `frame.depth + n`. Before multi-value, `n` was always either `0` or `1`, but now it can be any non-negative integer.
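As a small illustration of what the validator tracks, here is a made-up multi-value function annotated with the type stack and the control-frame bookkeeping at each step:

```wat
(func $pick (param i32 i32) (result i32)
  local.get 0          ;; type stack: [i32]
  local.get 1          ;; type stack: [i32 i32]
  block (param i32 i32) (result i32)
                       ;; push control frame: limit = 2 - 2 = 0
    i32.add            ;; type stack: [i32]
  end                  ;; pop frame: top type matches [i32], length = 0 + 1
)                      ;; the function's [i32] result is on the stack: valid
```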
The final bit of validation that is impacted by multi-value is determining when an `if` needs to have an `else` or not. In core Wasm, if the `if` does not produce a resulting value on the stack, it doesn’t need an `else` arm, since the whole `if`’s typing is `[] -> []`, which is also the typing for a no-op. With multi-value this is generalized to any `if` where the inputs and outputs are the same types: `[t*] -> [t*]`. Easy to implement, but also very easy to overlook (like I originally did!)
Multi-value support was added to `wasmparser` in these pull requests:
walrus
`walrus` is a WebAssembly-to-WebAssembly transformation library. We use it to generate glue code in `wasm-bindgen` and to polyfill WebAssembly features.

`walrus` constructs its own intermediate representation (IR) for WebAssembly. Similar to how `wasmparser` validates Wasm instructions, `walrus` also abstractly interprets the instructions while building up its IR. This meant that adding support for constructing multi-value IR to `walrus` was very similar to adding multi-value validation support to `wasmparser`. In fact, `walrus` also validates the Wasm while it is constructing its IR.
But multi-value has big implications for the IR itself. Before multi-value, you could view Wasm’s stack-based instructions as a post-order encoding of an expression tree.
Consider the expression `(a + b) * (c - d)`. As an expression tree, the multiplication is the root node, its children are the `+` and `-` nodes, and the leaves are the variables `a`, `b`, `c`, and `d`. A post-order traversal of a tree is where a node is visited after its children. A post-order traversal of our example expression tree visits `a`, `b`, `+`, `c`, `d`, `-`, and finally `*`.
Assume that `a`, `b`, `c`, and `d` are Wasm locals of type `i32`, with the values `9`, `7`, `5`, and `3` respectively. We can convert this post-order directly into a sequence of Wasm instructions that build up their results on the Wasm stack:
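Here is a sketch of that instruction sequence, annotated with the value stack after each instruction:

```wat
local.get $a   ;; stack: [9]
local.get $b   ;; stack: [9 7]
i32.add        ;; stack: [16]
local.get $c   ;; stack: [16 5]
local.get $d   ;; stack: [16 5 3]
i32.sub        ;; stack: [16 2]
i32.mul        ;; stack: [32]
```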
This correspondence between trees and Wasm stack instructions made it very natural to use a tree-like IR in `walrus`, where nodes are instructions and a node’s children are the instructions that produce the parent’s input values.1 Our IR used to represent each instruction as a node that directly owned the child nodes computing its operands.
But multi-value threw a wrench in this tree-like representation: now that an instruction can produce multiple values, when we have a parent⟶child edge in the tree, how do we know which of the child’s resulting values the parent wants to use? And if two different parents are each using one of the two values an instruction generates, we fundamentally don’t have a tree anymore; we have a directed acyclic graph (DAG).
We considered generalizing our tree representation into a DAG, and labeling edges with `n` to represent using the `n`th resulting value of an instruction. We weighed the complexity of implementing this representation against what our current use cases in `wasm-bindgen` demand, along with any future use cases we could think of. Ultimately, we decided it wasn’t worth the effort, since we don’t need that level of detail for any of the transformations or manipulations that `wasm-bindgen` performs, or that we foresee it doing in the future.
Instead, we decided that within a block, representing instructions as a simple list is good enough for our use cases, so our IR now stores each block’s instructions as a flat sequence rather than a tree.
Additionally, it turns out it is faster to construct and traverse this list-based representation, so switching representations in `walrus` also gave `wasm-bindgen` a nice little speed up.
The `walrus` support for multi-value was implemented in these pull requests:
wasm-bindgen
`wasm-bindgen` facilitates high-level interactions between Wasm modules and their host. Often that host is a Web browser and its DOM methods, or some user-written JavaScript. Other times it is an outside-the-Web Wasm runtime, like Wasmtime, using WASI and interface types. `wasm-bindgen` acts as a polyfill for the interface types proposal, plus some extra batteries included for a powerful user experience.
One of `wasm-bindgen`’s responsibilities is translating the return value of a Wasm function into something that the host caller can understand. When using interface types directly with Wasmtime, this means generating interface adapters that lift the concrete Wasm return values into abstract interface types. When the caller is some JavaScript code on the Web, it means generating some JavaScript code to convert the Wasm values into JavaScript values.
Let’s take a look at some Rust functions and the Wasm they get compiled down into.
First, consider returning a single integer from a Rust function, say a simple `pub fn answer() -> i32 { 42 }`.
And here is the disassembly of that Rust code compiled to Wasm:
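Here is a sketch of roughly what that looks like (made-up name and body; the important part is that the Wasm signature mirrors the Rust one):

```wat
;; No parameters and a single i32 result -- the same shape as the Rust function.
(func $answer (result i32)
  i32.const 42)
(export "answer" (func $answer))
```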
The resulting Wasm function’s signature is effectively identical to the Rust function’s signature. No surprises here. It is easy for `wasm-bindgen` to translate the resulting Wasm value to whatever is needed because `wasm-bindgen` has direct access to it; it’s right there.
Now let’s look at returning compound structures from Rust that don’t fit in a single Wasm value, such as a `Pair { a: i32, b: i32 }` struct returned by a `make_pair(a: i32, b: i32) -> Pair` function defined in `pair.rs`.
And here is the disassembly of this new Rust code compiled to Wasm:
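Here is a sketch of the shape of that disassembly (the field offsets are assumptions, and the exact code that rustc and LLVM emit will differ):

```wat
;; The first parameter is the "struct return" pointer; the Rust `a` and `b`
;; become the second and third parameters, and nothing is returned directly.
(func $make_pair (param $sret i32) (param $a i32) (param $b i32)
  ;; Write Pair.b, then Pair.a, through the struct return pointer.
  (i32.store offset=4 (local.get $sret) (local.get $b))
  (i32.store (local.get $sret) (local.get $a)))
(export "make_pair" (func $make_pair))
```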
The signature for the `make_pair` function in `pair.wasm` doesn’t look like its corresponding signature in `pair.rs`! It has three parameters instead of two, and it isn’t returning any values, let alone a pair.
What’s happening is that LLVM doesn’t support multi-value yet, so it can’t return two `i32`s directly from the function. Instead, callers pass in a “struct return” pointer to some space that they’ve reserved for the return value, and `make_pair` will write its return value through that struct return pointer into the reserved space. By convention, LLVM uses the first parameter as the struct return pointer, so the second Wasm parameter is our original `a` parameter in Rust and the third Wasm parameter is our original `b` parameter in Rust. We can see that the Wasm function is writing the `b` field first, and then the `a` field second.
How is space reserved for the struct return? Distinct from the Wasm standard’s stack that instructions push values to and pop values from, LLVM emits code to maintain a “shadow stack” in linear memory. A dedicated global serves as the stack pointer and always points to the top of the stack. Non-leaf functions that need some scratch space of their own decrement the stack pointer to allocate that space on entry (the stack grows down, so the “top” of the stack is its lowest address) and increment it to deallocate that space on exit. Leaf functions that don’t call any other function can skip incrementing and decrementing this stack pointer, which is exactly why we didn’t see `make_pair` messing with the stack pointer.
To verify that callers are allocating space for the return struct on the shadow stack, let’s add a function, `default_make_pair`, that calls `make_pair` with some default arguments and makes use of the resulting pair, and then inspect its disassembly.
I’ve annotated `default_make_pair`’s disassembly below to make it clear how the shadow stack pointer is manipulated to create space for return values and how the pointer to that space is passed to `make_pair`:
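What follows is a sketch of the shape of that disassembly rather than exact compiler output; the shadow stack pointer global, the scratch size, the default arguments, and the way the result is consumed are all assumptions:

```wat
;; Assumes a mutable global $shadow_stack_pointer and a linear memory are
;; declared elsewhere in the module.
(func $default_make_pair (result i32)
  (local $scratch i32)

  ;; Allocate 8 bytes of scratch space by decrementing the shadow stack
  ;; pointer (the shadow stack grows down).
  global.get $shadow_stack_pointer
  i32.const 8
  i32.sub
  local.tee $scratch
  global.set $shadow_stack_pointer

  ;; Pass a pointer to the scratch space as the struct return pointer,
  ;; along with the default arguments.
  local.get $scratch
  i32.const 42
  i32.const 1337
  call $make_pair

  ;; Use the pair: read both fields back out of the scratch space.
  (i32.add
    (i32.load (local.get $scratch))            ;; Pair.a
    (i32.load offset=4 (local.get $scratch)))  ;; Pair.b

  ;; Deallocate the scratch space by restoring the shadow stack pointer.
  local.get $scratch
  i32.const 8
  i32.add
  global.set $shadow_stack_pointer)
```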
When the caller is JavaScript, `wasm-bindgen` can use its knowledge of these calling conventions to generate JavaScript glue code that allocates shadow stack space, calls the function with the struct return pointer argument, reads the values out of linear memory, and finally deallocates the shadow stack space before converting the Wasm values into some JavaScript value.
But when using interface types directly, rather than polyfilling them, we can’t rely on generating glue code that has access to the Wasm module’s linear memory. First, the memory might not be exported. Second, the only glue code we have is interface adapters, not arbitrary JavaScript code. We want those values as proper return values, rather than through a side channel.
So I wrote a `walrus` transform in `wasm-bindgen` that converts functions that use a struct return pointer parameter without any actual Wasm return values into multi-value functions that don’t take a struct return pointer parameter but return multiple resulting Wasm values instead. This transform is essentially a “reverse polyfill” for multi-value functions.
The transform is only applied to exported functions that take a struct return pointer parameter, and rather than rewriting the source function in place, the transform leaves it unmodified but removes it from the Wasm module’s exports list. It generates a new function that replaces the old one in the Wasm module’s exports list. This new function allocates shadow stack space for the return value, calls the original function, reads the values out of the shadow stack onto the Wasm value stack, and finally deallocates the shadow stack space before returning.
For our running `make_pair` example, the transform produces an exported wrapper function like this:
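Here is a sketch of that wrapper’s shape (the names, scratch size, and shadow stack handling are assumptions; `wasm-bindgen`’s actual output differs in its details):

```wat
(func $make_pair_multivalue_shim (param $a i32) (param $b i32) (result i32 i32)
  (local $scratch i32)

  ;; Allocate scratch space for the struct return on the shadow stack.
  global.get $shadow_stack_pointer
  i32.const 8
  i32.sub
  local.tee $scratch
  global.set $shadow_stack_pointer

  ;; Call the original function with the struct return pointer.
  local.get $scratch
  local.get $a
  local.get $b
  call $make_pair

  ;; Read the two return values out of the scratch space onto the Wasm stack.
  (i32.load (local.get $scratch))
  (i32.load offset=4 (local.get $scratch))

  ;; Deallocate the scratch space; the two results stay on the stack and are
  ;; returned directly.
  local.get $scratch
  i32.const 8
  i32.add
  global.set $shadow_stack_pointer)
(export "make_pair" (func $make_pair_multivalue_shim))
```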
With this transform in place, `wasm-bindgen` can now generate multi-value function exports along with associated interface adapters that lift the concrete Wasm return values into abstract interface types.

The multi-value support and transform were implemented in `wasm-bindgen` in these pull requests:
- #1764: Introduce a multi-value transform
- #1805: Always use multi-value when targeting interface types
- #1839: Update binding metadata after multi-value transform
Wasmtime and Cranelift
Ok, so at this point, we can generate multi-value Wasm binaries with the Rust and Wasm toolchain — woo! But now we need to be able to run these binaries.
Enter Wasmtime, the WebAssembly runtime built on top of the Cranelift code generator. Wasmtime translates WebAssembly into Cranelift’s IR with the `cranelift-wasm` crate, and then Cranelift compiles the IR down to native machine code.
Implementing multi-value Wasm support in Wasmtime and Cranelift roughly involved two steps:
- Translating multi-value Wasm into Cranelift IR
- Supporting arbitrary numbers of return values in Cranelift
Translating Multi-Value Wasm into Cranelift IR
Cranelift has its own intermediate representation that it manipulates, optimizes, and legalizes before generating machine code for the target architecture. In order for Cranelift to compile some code, you need to translate whatever you’re working with into Cranelift’s IR. In our case, that means translating Wasm into Cranelift’s IR. This process is analogous to `rustc` converting its mid-level intermediate representation (MIR) to LLVM’s IR.2
Cranelift’s IR is made up of (extended) basic blocks3 containing code in single static-assignment form (SSA). SSA, as the name implies, means that each variable is assigned exactly once, when it is defined, and can never be re-assigned.
When translating to SSA form, most re-assignments to a variable `x` can be handled by defining a fresh `x1` and replacing subsequent uses of `x` with `x1`, and then turning the next re-assignment into `x2`, etc. But that doesn’t work for points where control flow joins, such as the block following the consequent and alternative arms of an `if`/`else`.
Consider Rust code where a variable `x` is assigned one value if `some_condition()` is true and a different value otherwise, and is then passed to `do_stuff(x)`. When translating to SSA, the assignment in the consequent arm defines `x0` and the assignment in the alternative arm defines `x1`. Should the `do_stuff` call at the bottom use `x0` or `x1` when translated into SSA? Neither!
SSA uses Φ (phi) functions to handle these cases. A phi function takes a number of mutually exclusive, control flow-dependent parameters and returns the one that was defined where control flow came from. In our example we would have `x2 = Φ(x0, x1)`, and if `some_condition()` was true then `x2` would get its value from `x0`. Otherwise, `x2` would get its value from `x1`.
If SSA and phi functions are new to you and you’re feeling confused, don’t worry! It was confusing for me too when I first learned about this stuff. But Cranelift IR doesn’t use phi functions per se, it has something that I think is more intuitive: blocks can have formal parameters.
Translating our example to Cranelift IR, the consequent arm becomes one block (`ebb1`), the alternative arm becomes another (`ebb2`), and the join point where `do_stuff` is called becomes a third block (`ebb3`).
Note that `ebb3` takes a parameter for the control flow-dependent value that we should pass to `do_stuff`! And the jumps in `ebb1` and `ebb2` pass their locally-defined values “into” `ebb3`! This is equivalent to phi functions, but I find it much more intuitive.
Anyways, translating WebAssembly code into Cranelift IR happens in the `cranelift-wasm` crate. It uses `wasmparser` to decode the given blob of Wasm and validate it, and then constructs Cranelift IR via (you guessed it!) abstract interpretation. As `cranelift-wasm` interprets Wasm instructions, rather than pushing and popping Wasm values, it maintains a stack of Cranelift IR SSA values. As `cranelift-wasm` enters and exits Wasm control frames, it creates Cranelift IR basic blocks.
This process is fairly similar to `walrus`’s IR construction, which was pretty similar to `wasmparser`’s validation, and the whole thing felt pretty familiar by now. There were just a couple tricky bits.
The first tricky bit was remembering to add parameters (phi functions) to the first basic block for a Wasm `loop`’s body, representing its Wasm stack parameters. This is necessary because control flow joins from two places at the top of the loop body: from where we were when we first entered the loop, and from the bottom of the loop when we finish an iteration and are starting another. In terms of the abstract interpretation, this means you need to pop off the particular SSA values you have on the stack at the start of the loop, construct SSA values for the loop’s parameters, and then push those onto the stack instead. I originally overlooked this, resulting in a fair bit of head scratching and debugging mis-translated IR. Whoops!
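For example, here is a sketch of a loop whose body takes a stack parameter: control flow reaches the top of the body both from the initial entry and from the backwards branch, so the corresponding Cranelift block needs a block parameter.

```wat
;; Counts $n down to zero, keeping the counter on the stack as the loop's
;; parameter (assumes $n >= 1); the final value, zero, is returned.
(func $count_down (param $n i32) (result i32)
  local.get $n
  loop $continue (param i32) (result i32)
    i32.const 1
    i32.sub            ;; decrement the value flowing around the loop
    local.tee $n
    local.get $n
    i32.const 0
    i32.ne
    br_if $continue    ;; branch back, passing the value as the loop parameter
  end)
```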
Second, `cranelift-wasm` will track reachability during translation, and if some Wasm code is unreachable, we don’t even bother constructing Cranelift IR for it. But that boundary between unreachable and reachable code, and when one transitions to the other, can be a bit subtle. You can be in an unreachable state, fall through the current block into the following block, and become reachable once again. Throw in `if`s with `else`s, and `if`s without `else`s, and unconditional branches, and early returns, and it is easy for bugs to sneak in. And in the process of adding multi-value Wasm support, bugs did, in fact, sneak in. This time the bug involved an `if` that was initially reachable, and whose consequent arm also ends reachable, but whose alternative arm ends unreachable. Given that, should the block following the consequent and alternative be reachable? Yes, but we were incorrectly computing that it shouldn’t be.
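A minimal sketch of that shape:

```wat
(func $example (param $c i32) (result i32)
  local.get $c
  if (result i32)
    i32.const 1    ;; the consequent ends reachable and falls through
  else
    i32.const -1
    return         ;; the alternative ends unreachable: it never falls through
  end
  ;; This code is still reachable (via the consequent) and must be translated.
  i32.const 10
  i32.add)
```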
To fix this bug, I refactored how `cranelift-wasm` computes the reachability of code following an `if`. It now correctly determines that the following block is reachable if the head of the `if` is reachable and any of the following are true:
- The consequent or alternative end reachable, in which case they will continue to the following block.
- The consequent or alternative do an early branch (potentially a conditional branch) to the following block, and that branch is reachable.
- There is no alternative, so if the `if`’s condition is false, we go directly to the following block.
To be sure that we are handling all these edge cases correctly, I added tests enumerating every combination of reachability of an `if`’s arms as well as early branches. Phew!
Finally, this bug first manifested itself in a 39 KiB Wasm file, and figuring out what was going on was made so much easier thanks to tools like `wasm-reduce` (a tool that is part of binaryen) and `creduce` (working on the WAT disassembly, rather than the binary Wasm). I forget which one I used this time, but I’ve successfully used both to turn big, complicated Wasm test cases into small, isolated test cases that highlight the bug at hand. These tools are real life savers, so it is worth broadcasting their existence just in case anyone doesn’t know about them!
Translating multi-value Wasm into Cranelift IR happened in these pull requests:
- #1049: Translate multi-value Wasm into Cranelift IR
- #1110: Correctly jump to the destination block at the end of the consequent
- #1143: Fix reachability tracking for `if`s
Supporting Many Return Values in Cranelift
Cranelift IR the language supports returning arbitrarily many values from a function, but Cranelift the implementation only supported returning as many values as there are available registers in the calling convention that the function is using. For example, with the System V calling convention, you could return up to three pointer-sized values, and with the Windows fastcall calling convention, you could only return a single pointer-sized value.
So the question was:
How to return more values than can fit in registers?
This should trigger some déjà vu: when compiling to Wasm, how was LLVM returning structures larger than could fit in a single Wasm value? Struct return pointer parameters! This is nothing new, and in fact its use is dictated by certain calling conventions; we just hadn’t implemented support for it in Cranelift yet. So that’s what I set out to do.
When Cranelift is given some initial IR, the IR is generally portable and machine independent. As the IR moves through Cranelift, it eventually reaches a legalization phase, where instructions that don’t have a direct mapping to a machine code instruction in the target architecture are replaced with ones that do. For example, on 32-bit x86, Cranelift legalizes 64-bit arithmetic by expanding it into a series of 32-bit operations. During this process, we also legalize function signatures: a value that is larger than can fit in a register may need to be split into multiple parameters, each of which can fit in a register, for example. Signature legalization also assigns locations to formal parameters based on the function’s calling convention: this parameter should be in this register, and that parameter should be at this stack offset, etc.
My plan for implementing arbitrary numbers of return values via struct return pointer parameters was to hook into Cranelift’s legalization phase in three places: legalizing signatures, legalizing `return` instructions, and legalizing `call` instructions.
When legalizing signatures, we need to determine whether a struct return pointer is required, and if so, update the signature to reflect that.
In the legalized signature, `fast` means the signature is using our internal, unstable “fast” calling convention. The `sret` is an annotation for a parameter or return value, in this case documenting that it is being used as a struct return pointer. Register names like `%rdi` and `%rax` are the locations assigned to the parameter and return value by the calling convention.4
After legalization, we’ve added the struct return pointer parameter, removed the old returns, and arranged for the function to return the struct return pointer itself. Returning the struct return pointer is mandated by the System V ABI’s calling conventions, and we currently do the same thing for our internal, unstable calling convention as well.
After signatures are legalized, we need to legalize `call` and `return` instructions as well, so that they match the new, legalized signatures. Let’s turn our attention to the latter first.
Legalizing a `return` instruction removes the return values from the `return` instruction itself, and creates a series of preceding `store` instructions that write the return values through the struct return pointer. Consider, for example, a function that returns four `i32` values.
In the legalized version, the new `v4` value is the struct return pointer parameter. The `notrap` annotation on each `store` instruction says that the store can’t trigger a trap: it is the caller’s responsibility to give us a valid struct return pointer that points to enough space to fit all of our return values. The `aligned` annotation is similar, saying that the pointer we are storing through is properly four-byte aligned for an `i32`; again, the responsibility is on the caller to ensure the struct return pointer has at least the maximum alignment required by the return values’ types. The `+4`, `+8`, and `+12` are static immediates that specify an offset to be added to the `v4` operand to compute the destination address for each store.
Legalizing a `call` instruction has comparatively more responsibilities than legalizing a `return` instruction. Yes, it involves adding the struct return pointer argument to the `call` instruction itself, and then loading the values out of the struct return space after the callee returns to us. But it additionally must allocate the space for the struct return in the caller function’s stack frame, and it must ensure that the size and alignment invariants that the callee and its `return` instructions rely on are upheld.
Consider, as an example, some caller function calling a callee function that returns four `i32`s.
In the legalized caller, `ss0 = sret_slot 16` declares a sixteen-byte stack slot that we created for the struct return space. It is also aligned to sixteen bytes, which is greater than necessary in this case, since we only need four-byte alignment for the `i32`s. Similar to the `store`s in the legalized `return`, the `load`s in the legalized `call` are also annotated with `notrap` and `aligned`. Finally, an alias like `v0 -> v6` establishes that `v0` is another name for `v6`, so we don’t have to eagerly rewrite all the following uses of `v0` into uses of `v6` (even though there don’t happen to be any in this particular example).
With signature, `call`, and `return` legalization that all understand when and how to use struct return pointers, we now have full support for arbitrarily many multi-value returns in Cranelift and Wasmtime. This support was implemented in these pull requests:
- #1147: Support many multi-value returns with struct return pointers
- #1213: `legalize_signatures`: Optimistically try and assign register locations to return values; backtrack to use struct-return pointer parameter
Putting It All Together
Finally, let’s put everything together and create a multi-value Wasm binary with the Rust and Wasm toolchain and then run it in Wasmtime!
First, let’s create a new library crate with `cargo new --lib`.
We’re going to use `wasm-bindgen` to return a string from our Wasm function, so let’s add it as a dependency in `Cargo.toml`. Additionally, we’re going to create a Wasm library rather than an executable, so we also specify that this crate is a “cdylib” in `Cargo.toml`.
Let’s fill out `src/lib.rs` with our string-returning function: a `#[wasm_bindgen]`-exported `hello` function that takes a string and returns a new greeting `String`.
We can build our Wasm library with `cargo wasi build`.
This will automatically build a Wasm file for the `wasm32-wasi` target and then run `wasm-bindgen`’s post-processor to add interface types and introduce multi-value. We can verify this with the `wasm-objdump` tool from WABT.
In its output, we can see that the `<hello multivalue shim>` function is exported as `"hello"` and that it has the multi-value type `(i32, i32) -> (i32, i32)`. This shim function is indeed the one introduced by the multi-value transform we added to `wasm-bindgen` to wrap the original `<hello>` function and turn its struct return pointer into multi-value returns.
Finally, we can load this Wasm library into Wasmtime, which will use Cranelift to just-in-time (JIT) compile it to machine code, and then invoke the `hello` export with the string `"multi-value Wasm"`.
It works!!
Conclusion
The Rust and WebAssembly toolchain now supports generating Wasm binaries that make use of the multi-value proposal, and Cranelift and Wasmtime can compile and run multi-value Wasm binaries. This has been — I hope! — an interesting tale of implementing a Wasm feature through the whole vertical ecosystem, start to finish.
Lastly, and definitely not leastly, I’d like to thank Dan Gohman, Benjamin Bouvier, Alex Crichton, Yury Delendik, @bjorn3, and @iximeow for providing reviews and implementation suggestions for different pieces of this journey at various stages. Additionally, thanks again to Alex and Dan, and to Lin Clark and Till Schneidereit for all providing feedback on early drafts of this piece.
0 Additionally, Thomas Lively and some other folks are already working on adding multi-value Wasm support directly to LLVM, so that is definitely coming in the future, and it made sense for me to focus my attention elsewhere. ↩
1 There are some “stack-y” forms that don’t quite directly map to a tree. For example, you can insert stack-neutral, side-effectual instruction sequences in the middle of any part of the post-order encoding of an expression tree. Here is a `call` that produces some value, followed by a `drop` of that value, inserted into the middle of the post-order encoding of `1 + 2`:
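A sketch, with a made-up side-effectful `$do_some_side_effect` function that happens to return a value:

```wat
i32.const 1
call $do_some_side_effect  ;; a stack-neutral, side-effectual detour: it
drop                       ;; produces a value that is immediately dropped
i32.const 2
i32.add
```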
These stack-y forms can be represented by introducing blocks that don’t also introduce labels for control flow branches. You can think of them as sort of similar to Common Lisp’s `progn` and `prog1` forms or Scheme’s `(begin ...)`. ↩
2 Fun fact: there is also ongoing work to make Cranelift a viable alternative backend for `rustc`! See the goals write up and the `bjorn3/rustc_codegen_cranelift` repo for details. ↩
3 Originally, Cranelift was designed to use extended basic blocks, rather than regular basic blocks. Both can only be entered at their head, but basic blocks additionally can only exit at their tail, while extended basic blocks can have conditional exits from the block in their middle. The idea is that extended basic blocks more directly match machine code which falls through untaken conditional branches to continue executing the next instruction. However, Cranelift is in the process of switching over to regular basic blocks, and removing support for extended basic blocks. The reasoning is that all its optimization passes end up essentially constructing and keeping track of basic blocks anyways, which added complexity, and the extended basic blocks weren’t ultimately carrying their weight. ↩
4 Semi-confusingly, the square brackets are just the syntax that Cranelift decided to use to surround parameter locations, and they do not represent dereferencing the way they would in Intel-flavored assembly syntax. ↩