Can you have hardware rendering on an emulated video card?

I ask this (potentially very stupid) question because I assume that one of the main obstacles to hardware rendering is that physical graphics cards are not fully accessible (they are proprietary), whereas emulated video cards are; if they were not fully accessible, how could they be properly and fully emulated?

That being said, the ATI Rage 128 Pro in QEMU’s RISC-V emulation environment is an old graphics card… but… being emulated… does that mean we now have full knowledge of it, and therefore the ability to execute hardware rendering on it? Agreed, it would be “software” (emulated) hardware rendering (which probably wouldn’t be all that fast, but maybe a little faster than pure software rendering), but the OS and apps wouldn’t know the difference, since that’s what an emulator does… it mimics physical hardware on different hardware.

I see no reason why a software implementation of a video card should be faster than a software implementation of a graphics API. In fact, the opposite should be true.

We already have the knowledge to do this.

Why? Many Linux drivers for graphics cards are MIT licensed
(e.g. Makefile « amdgpu « amd « drm « gpu « drivers - kernel/git/stable/linux.git - Linux kernel stable tree),
and AMD and Intel provide detailed documentation on how to program their devices.

The problem is manpower to implement drivers… deciding on an architecture, figuring out whether porting drivers is viable, etc.

Emulated software rendering is definitely much, much slower than what we are doing now, which is implementing the OpenGL API on top of a software renderer.

Funnily enough, there is a project in the other direction, providing hardware acceleration to the VM guest; the project is called virgil3d. IIRC it basically forwards OpenGL/Vulkan calls to the host machine and lets its GPU run them. VirGL — The Mesa 3D Graphics Library latest documentation

That sounds a lot like what QEMU is doing with my Asus Zephyrus G laptop… it’s using the physical WiFi connection (hardware) of my laptop to provide networking access in Haiku RISC-V (which is a purely emulated environment). I’m assuming Haiku RISC-V must have the drivers that match my Asus WiFi hardware, but QEMU is the “tunnel” by which it can access the physical hardware, correct?

Usually not. QEMU emulates some network card, or maybe just a pure tunnel, and passes it to the OS in several possible configurations (tunnel, NAT, etc.).

While it is possible to pass a device (via PCI passthrough) completely to the VM, that is usually not done (unless you are developing), since you would then not have any WiFi network connectivity left on the host.

In the case of network cards, it will be something like this:

Application → Haiku network stack → network driver → fake, emulated network card → Ethernet packets injected by QEMU into Linux in some way → Linux network stack → Linux WiFi interface driver → finally the packet is sent onto the actual network

This is a relatively simple case, because network packets are designed to easily go from machine to machine, and a virtual machine is not very different from a physical machine routing its packets through another one.
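To make the “injected by QEMU into Linux in some way” step a bit more concrete, here is a minimal, illustrative C sketch of the standard Linux TAP mechanism a hypervisor can use for this (error handling trimmed; the interface name is whatever the host is configured with):

```c
/* Minimal sketch: open a TAP device so that raw Ethernet frames written to
 * the returned fd enter the host's Linux network stack like any other
 * interface. This is one of the ways a hypervisor can inject guest traffic. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int open_tap(const char *name)
{
	int fd = open("/dev/net/tun", O_RDWR);
	if (fd < 0)
		return -1;

	struct ifreq ifr;
	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;	/* raw Ethernet frames, no extra header */
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;	/* write() a guest frame here; read() frames destined for the guest */
}
```

Every frame the emulated network card “transmits” is then simply written to that descriptor, and the host kernel bridges, NATs or routes it onward.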

For accelerated 3D rendering? That’s a lot more complex. Graphics cards are complicated things. The protocol between the driver and the graphics card is very low level. So, a virtual machine working in this way would have to intercept commands at this low level, convert them back into something high level like OpenGL commands, and then send that back into the host graphics driver stack where it gets converted again into low level commands for the graphics card.
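Purely as an illustration of what that interception and translation looks like (the opcodes and packet layout below are invented, not the real Rage 128 command set), the host side ends up being a big decoder that replays guest packets as high-level API calls:

```c
/* Illustrative only: hypothetical guest command packets replayed as OpenGL
 * calls on the host. A real emulator would have to handle the card's actual,
 * far larger command set, plus memory mappings, state and synchronization. */
#include <stdint.h>
#include <GL/gl.h>

enum { CMD_CLEAR = 1, CMD_DRAW_TRIANGLES = 2 };	/* hypothetical opcodes */

struct fifo_packet {		/* hypothetical layout of a guest command packet */
	uint32_t opcode;
	uint32_t arg0;		/* e.g. packed clear colour, or first vertex index */
	uint32_t arg1;		/* e.g. vertex count */
};

void replay_packet(const struct fifo_packet *p)
{
	switch (p->opcode) {
	case CMD_CLEAR:
		glClearColor(((p->arg0 >> 16) & 0xff) / 255.0f,
		             ((p->arg0 >> 8) & 0xff) / 255.0f,
		             (p->arg0 & 0xff) / 255.0f, 1.0f);
		glClear(GL_COLOR_BUFFER_BIT);
		break;
	case CMD_DRAW_TRIANGLES:
		glDrawArrays(GL_TRIANGLES, (GLint)p->arg0, (GLsizei)p->arg1);
		break;
	}
}
```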

Obviously this would be a lot of complicated code. It is what is done in emulators for modern game consoles, so it’s possible, but in terms of performance you’re restricted to the 3D capabilities of hardware a few generations older than your actual graphics card and machine, because there are so many things going on.

But there’s another way. This is what nephele linked: VirGL.

It is part of “virtio”. The idea of virtio is, instead of having virtual machines emulating actual, existing hardware, they can instead emulate “fantasy” hardware that is much simpler because it doesn’t have to be actual hardware. In the case of 3D acceleration, this is a video card where you can pretty much directly send OpenGL commands to it, and it will execute them (by sending them directly to the Linux graphics stack). It is indeed one way to develop other parts of the graphics stack and solve a few of the issues, and then we can write more drivers for other graphics cards once the upper layers of the stack are in place.
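A rough sketch of why this is so much simpler: the guest driver can hand the virtual GPU a command that is already high level, so there is nothing to decode back from register writes. The names and wire format below are hypothetical, just to show the shape of a paravirtualized submission path:

```c
/* Hypothetical guest-side submission path for a paravirtual GPU: the command
 * stays high level, and the host turns it into a real GL/Vulkan call. */
#include <stddef.h>
#include <stdint.h>

enum { VGPU_CMD_DRAW = 0x10 };		/* hypothetical opcode */

struct vgpu_cmd_draw {			/* hypothetical wire format */
	uint32_t opcode;
	uint32_t primitive;		/* triangles, lines, ... */
	uint32_t first_vertex;
	uint32_t vertex_count;
};

/* Stand-ins for the real virtqueue machinery. */
static void vq_push(const void *buf, size_t len) { (void)buf; (void)len; }
static void vq_kick(void) { /* notify the host */ }

void vgpu_draw_triangles(uint32_t first, uint32_t count)
{
	struct vgpu_cmd_draw cmd = {
		.opcode = VGPU_CMD_DRAW,
		.primitive = 0,		/* triangles */
		.first_vertex = first,
		.vertex_count = count,
	};
	vq_push(&cmd, sizeof(cmd));
	vq_kick();	/* the host decodes this and issues the matching draw call */
}
```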

We have already written virtio drivers for various other things, for example “balloon memory”, which is a way for a virtual machine to dynamically grow or shrink its RAM size as needed at runtime, and give the RAM back to the host system when it is not needed. This means there is no need to allocate several gigabytes of RAM to a virtual machine permanently. Of course, such a thing would not be found in actual hardware, but with virtio, it’s possible.
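Conceptually the balloon is very simple. The sketch below is hypothetical and glosses over the real virtio queue plumbing, but it shows the idea: the guest allocates pages it promises not to touch and reports them to the host, which can then give that physical memory to someone else.

```c
/* Hypothetical balloon "inflate": allocate guest pages and report them to the
 * host so it can reclaim the underlying physical memory. A real driver keeps
 * a list of inflated pages so it can release them again when deflating. */
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Stand-in for queuing a page-frame number to the host via a virtqueue. */
static void report_pfn_to_host(uintptr_t pfn) { (void)pfn; }

void balloon_inflate(size_t n_pages)
{
	for (size_t i = 0; i < n_pages; i++) {
		void *page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
		if (page == NULL)
			break;
		/* Simplification: a real driver reports the guest-physical frame. */
		report_pfn_to_host((uintptr_t)page / PAGE_SIZE);
	}
}
```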


It’s also worth mentioning the VMware virtual GPU device (aka “VMware SVGA II”, or “VMware SVGA v3” in the latest versions), which probably served as inspiration for the VirGL authors to create an open-source alternative. While VMware’s implementation of the virtual GPU is closed-source, they open-sourced its Linux device driver and published the GPU documentation, which led to the device being reimplemented and open-sourced by VirtualBox (and to some extent by QEMU).

The device supports 3D acceleration, which enthusiasts have made work on something as old as Windows 98 :exploding_head:


The two biggest obstacles for GPU drivers are memory management and the JIT compiler. Both are difficult to work with. Fortunately the JIT situation is much better than 10-12 years ago, with AMD and Intel offering open-source variants. You need those compilers to take your graphics stack output and turn it into GPU machine code. That’s a lot of work, but the Vulkan driver stack in Linux streamlined that a lot. Next up is handling the card, card memory, and card commands. AMD uses AtomBIOS (or did) for most of this. But getting the ring buffer, screen control, etc. all handed off is a difficult task.
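For what it’s worth, the “handing off” part usually boils down to a ring buffer plus a doorbell register. Here is a hypothetical sketch (register offset and packet format invented for illustration; the real ones are exactly what requires the hardware documentation):

```c
/* Hypothetical command-ring submission: commands go into a buffer the GPU can
 * see, and a write-pointer register tells it how far it may read. */
#include <stdint.h>

#define REG_RING_WPTR 0x1000		/* hypothetical MMIO register offset */

struct gpu_ring {
	volatile uint32_t *mmio;	/* mapped register BAR */
	uint32_t *buf;			/* ring buffer memory visible to the GPU */
	uint32_t size;			/* number of dwords, power of two */
	uint32_t wptr;			/* our current write position */
};

static void ring_emit(struct gpu_ring *r, uint32_t dword)
{
	r->buf[r->wptr] = dword;
	r->wptr = (r->wptr + 1) & (r->size - 1);
}

void ring_submit(struct gpu_ring *r, const uint32_t *pkt, uint32_t len)
{
	for (uint32_t i = 0; i < len; i++)
		ring_emit(r, pkt[i]);
	r->mmio[REG_RING_WPTR / 4] = r->wptr;	/* doorbell: advance the write pointer */
}
```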

Implementing a GPU core on an FPGA would be a way to get an open-source GPU; IIRC a group already did a primitive version.

The big dog in the hunt isn’t the hardware, it’s all software, AFAICT.

The effective way to emulate a video card/GPU would be to implement it on an FPGA; you could probably get pretty good performance.

Common misunderstanding. At least AMD does not provide any kind of documentation for low-level programming of their post-2010 GPUs. Driver source code is available, but undocumented: there is no corresponding documentation and no comments in the source code (for example, there is no documentation of what “RLC” is or how to program it). The driver code was written by AMD employees referencing documentation under NDA.


No. Basic GPU memory management can be trivially implemented, and a shader compiler already exists in the Mesa 3D driver collection and can be trivially ported to Haiku. The biggest hurdle is low-level GPU programming, which is usually done by kernel drivers and is mostly undocumented. Incorrect GPU programming causes very obscure problems, such as whole-computer freezes, that are hard to debug. Unlike userland OpenGL/Vulkan drivers, kernel drivers are hard to port: they are usually written in very unportable code and actively use features of a specific kernel such as Linux. One rare example of a portable kernel GPU driver is the recently open-sourced Nvidia kernel GPU driver for the Turing+ series. It can be compiled and run on Haiku with minor effort. But Nvidia has not open-sourced the userland driver part yet, and the Mesa 3D userland driver uses an incompatible API (porting work is needed to resolve the ioctl API incompatibilities).


Could NVK + Zink be useful for Haiku in supporting NVIDIA GPUs?

Yes, the Nvidia open GPU kernel driver + NVK + Zink combination should work in theory after porting NVK to the Nvidia RMAPI ioctls.


Mesa exports an IR language from OpenGL that the lower-level JIT has to dispatch, packetize, and memory-manage.

Nothing I said was factually inaccurate.

Mesa exports the final machine code, to be executed by the GPU, into an allocated GPU memory block. The kernel/server GPU driver does not need to know anything about the GPU instructions passed from the OpenGL/Vulkan driver; it just schedules execution of the command buffer and notifies completion.
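In other words, the split looks roughly like the hypothetical sketch below (the ioctl numbers and struct are invented for illustration; real drivers such as amdgpu or nouveau each define their own, but the shape is the same): userland hands the kernel an already-compiled command buffer, and the kernel only schedules it and reports a completion fence.

```c
/* Hypothetical submit/wait interface between a userland GPU driver and its
 * kernel counterpart. The kernel never looks at the GPU instructions; it only
 * queues the buffer for execution and signals a fence on completion. */
#include <stdint.h>
#include <sys/ioctl.h>

struct submit_args {
	uint64_t cmdbuf_gpu_addr;	/* GPU address of the compiled command buffer */
	uint32_t cmdbuf_size;		/* size in bytes */
	uint32_t out_fence;		/* filled in by the kernel: completion fence id */
};

#define DRV_IOCTL_SUBMIT _IOWR('G', 0x01, struct submit_args)	/* hypothetical */
#define DRV_IOCTL_WAIT   _IOW('G', 0x02, uint32_t)		/* hypothetical */

int submit_and_wait(int drm_fd, uint64_t gpu_addr, uint32_t size)
{
	struct submit_args args = {
		.cmdbuf_gpu_addr = gpu_addr,
		.cmdbuf_size = size,
	};
	if (ioctl(drm_fd, DRV_IOCTL_SUBMIT, &args) < 0)
		return -1;
	return ioctl(drm_fd, DRV_IOCTL_WAIT, &args.out_fence);
}
```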