In the case of network cards, the path looks something like this:
Application → Haiku network stack → network driver → fake, emulated network card → Ethernet packets injected by QEMU into Linux in some way → Linux network stack → Linux Wi-Fi interface driver → finally the packet is sent out onto the actual network
This is a relatively simple case, because network packets are designed to easily go from machine to machine, and a virtual machine is not very different from a physical machine routing its packets through another one.
For accelerated 3D rendering? That’s a lot more complex. Graphics cards are complicated things. The protocol between the driver and the graphics card is very low level. So, a virtual machine working in this way would have to intercept commands at this low level, convert them back into something high-level like OpenGL commands, and then send that back into the host graphics driver stack, where it gets converted again into low-level commands for the graphics card.
Obviously this would be a lot of complicated code. It is what emulators for modern game consoles do, so it’s possible, but in terms of performance you’re restricted to the 3D capabilities of hardware a few generations older than your actual graphics card and machine, because there is so much going on in between.
But there’s another way. This is what nephele linked: VirGL.
It is part of “virtio”. The idea of virtio is that, instead of emulating actual, existing hardware, a virtual machine can emulate “fantasy” hardware that is much simpler, because it doesn’t have to behave like any real device. In the case of 3D acceleration, this is a video card you can pretty much directly send OpenGL commands to, and it will execute them (by sending them directly to the Linux graphics stack on the host). It is indeed one way to develop other parts of the graphics stack and solve a few of the issues; once the upper layers of the stack are in place, we can then write more drivers for other graphics cards.
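To give an idea of how simple this “fantasy” hardware is: a virtio device is essentially just a set of ring buffers (“virtqueues”) shared between guest and host. Below is a rough C sketch of the split-virtqueue layout, quoted from memory from the virtio specification, so treat it as an illustration rather than the normative definition. The guest fills descriptors pointing at buffers (for virtio-gpu, those buffers carry the rendering commands), and the device reports back what it consumed.

```c
#include <stdint.h>

/* Sketch of the virtio "split virtqueue" shared-memory layout. The guest
 * writes descriptors and offers them in the "available" ring; the device
 * consumes them and answers in the "used" ring. No GPU registers, no
 * vendor-specific command formats. */

struct virtq_desc {
	uint64_t addr;   /* guest-physical address of a buffer */
	uint32_t len;    /* length of that buffer in bytes */
	uint16_t flags;  /* e.g. "chained to next descriptor", "device writes here" */
	uint16_t next;   /* index of the next descriptor in a chain */
};

struct virtq_avail {
	uint16_t flags;
	uint16_t idx;    /* where the guest will place the next offered index */
	uint16_t ring[]; /* descriptor indices offered to the device */
};

struct virtq_used_elem {
	uint32_t id;     /* which descriptor chain the device finished with */
	uint32_t len;    /* how many bytes the device wrote back */
};

struct virtq_used {
	uint16_t flags;
	uint16_t idx;
	struct virtq_used_elem ring[];
};
```

On the host side this is what QEMU’s virtio-gpu device (together with the virglrenderer library) implements: the guest driver just puts its command buffers into these rings instead of poking at real GPU registers.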
We have already written virtio drivers for various other things, for example “balloon memory”, which is a way for a virtual machine to dynamically grow or shrink its RAM size as needed at runtime, and give the RAM back to the host system when it is not needed. This means there is no need to permanently allocate several gigabytes of RAM to a virtual machine. Of course, such a thing would not be found in actual hardware, but with virtio, it’s possible.
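The balloon protocol is about as simple as it gets: the guest hands the host a list of page frame numbers it is willing to give up, over one of these virtqueues. Here is a conceptual sketch, not our actual driver code; `virtqueue_submit` and `balloon_inflate` are made-up names for illustration, and I’m assuming the spec’s convention that pages are counted in 4 KiB units.

```c
#include <stdint.h>
#include <stddef.h>

#define BALLOON_PFN_SHIFT 12  /* virtio-balloon counts pages in 4 KiB units */

/* Hypothetical stand-in for the real submission path (fill a descriptor,
 * put it in the available ring, notify the device); not shown here. */
static void virtqueue_submit(void *queue, const void *buf, size_t len)
{
	(void)queue; (void)buf; (void)len;
}

/* "Inflate" the balloon: tell the host it may reclaim these guest pages.
 * Deflating (taking the RAM back) works the same way on a second queue. */
void balloon_inflate(void *inflate_queue,
                     const uint64_t *guest_phys_addrs, size_t count)
{
	uint32_t pfns[256];
	size_t batch = count < 256 ? count : 256;

	for (size_t i = 0; i < batch; i++)
		pfns[i] = (uint32_t)(guest_phys_addrs[i] >> BALLOON_PFN_SHIFT);

	/* Once the host processes this buffer, it can unmap and reuse the RAM. */
	virtqueue_submit(inflate_queue, pfns, batch * sizeof(uint32_t));
}
```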