I feel like this project is a classic example of a README missing a "Why?" section. Not that the project needs justification; it's just hard to evaluate without understanding why they chose to implement a transpiler rather than an embedded WASM VM. Or why not transpile to native assembly? I'm sure they have good reasons, but they aren't listed here.
i think your questions are implicitly answered in the top page of the readme, but by showing rather than telling
esp32 and stm32 between them run three different native instruction sets (though this only supports two of them; the other chips listed in the first line of the readme are all arm)
the coremark results seem like an adequate answer to why you'd use a compiler rather than an interpreter
i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
There's no scenario where a bespoke WASM interpreter is slower than MicroPython - though this isn't really Python.
WASM is an almost "ideal" bytecode target for embedded AOT native compilation as well, though yeah, you'd have to implement all of the backend targets.
maybe if you don't have enough space in flash for both the μpython interpreter and the wasm interpreter? but, yeah, this isn't being compiled to python but to c (for invocation from python)
> i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
I know nothing about micropython. Do the modules contain bytecode?
In this case, the modules contain native code compiled for the target architecture. Micropython has something approximating a dynamic linker to load them.
.mpy modules can contain MicroPython bytecode and/or native machine code. In this case, WASM is compiled (via C) to native code. So the performance is very good, much better than interpreting either MicroPython bytecode or WASM bytecode.
The conventional way of creating native modules for MicroPython is to write them in C. This work allows using any language that supports WASM as an output target.
Thanks for the feedback.
I'll improve the README based on your inputs. I was focused on the actual research.
Overall, this allows writing code in statically compiled languages and running it (fast) on embedded systems with MicroPython.
MicroPython itself is comparatively slow, and this provides tools to deliver more demanding software (AI, signal processing, etc).
Tooling already exists to compile static languages (rust, C++, and so on) to wasm, and tooling already exists to run that WASM on a raspberry pi (wasmer, etc).
That makes me feel the end goal here is not what's described ("make wasm so it can run on a raspberry pi"), but rather "make wasm run in micropython".
Curious if you've seen the WAMR runtime which is explicitly designed to be lightweight enough for freestanding embedded systems on some higher end microcontrollers (Cortex-M4F).
Okay great! But I'm not using MicroPython, just compiling directly for the ESP32. Would that mean that to use wasm2mpy I'd need to add a MicroPython VM to my code?
Would you say that using wasm2mpy is better suited for my needs than having a wasm runtime like the one from bytecodealliance? What would be the advantage in this case?
Yes, for these microcontroller-type chips, ahead-of-time compilation would definitely be more suitable. WAMR probably supports it, but you can also do it via WASM2C, easily. I prefer not to depend on WAMR; it is very bloated
Jokes aside, when I first saw this I also assumed that WASM was being transpiled into a subset of micropython as some sort of homage to WASM's asm.js roots. The explanations in this thread make much more sense.
Like several others I'm confused about the tagline for this. If I understand correctly, it's not compiling "WASM to micropython", but to native binary code with generated micropython bindings?