Fixing that problem probably would've been a massive undertaking. Beyond that, implementing kernel mechanics is a lot more than faking syscalls: the various types of namespaces, FUSE, random edge cases that applications do expect, kernel modules, etc. At the end of the day, users don't want to stumble into some weird compatibility issue because they're running not-quite-Linux; it's a better UX to just offer normal Linux with better integration.
The WSL2 design isn't stupid, it's practical. What I will give you is that it's not elegant in an "ivory tower of ideal computing" sense.
When people talk about improved compatibility or higher practicality, I wonder why they don't just run Linux on metal at that point. You can either run it on your laptop, or connect to a networked computer.
Your serial port might have worked, but your Docker didn't. (And someone else's other drivers didn't, and mmapping had ever-so-slightly different semantics, causing rare and hard-to-reproduce issues.)
WSL2, on the whole, is much more compatible. If you want 100% Linux compatibility, just run Linux.
I do. That's why I didn't know the current answer to the question. But I use software that wants to talk to hardware, not just cloud software that might as well be on a VPS.
Calling gcc (which runs entirely happily in WSL2) "cloud software that might as well be on a VPS" is simultaneously accurate and, apparently, insulting.