Can you have hardware rendering on an emulated video card?

I ask this (potentially very stupid) question because I assume that one of the main obstacles to hardware rendering is the fact that physical graphics cards are not fully accessible (proprietary), whereas emulated video cards are; after all, if they were not fully accessible, how could they be properly and fully emulated?

That being said, the ATI Rage 128 Pro in QEMU’s RISC-V emulation environment is an old graphics card… but… being emulated… does that mean we now have full knowledge of it and therefore the ability to execute hardware rendering on it? Agreed, it would be “software” (emulated) hardware rendering (which probably wouldn’t be all that fast, but maybe a little faster than pure software rendering), but the OS and apps wouldn’t know the difference, since that’s what an emulator does… it mimics physical hardware on different hardware.

I see no reason why a software implementation of a video card should be faster than a software implementation of a graphics API. In fact, the opposite should be true.

1 Like

We already have the knowledge to do this.

Why? Many Linux drivers for graphics cards are MIT licensed.
(e.g. Makefile « amdgpu « amd « drm « gpu « drivers - kernel/git/stable/linux.git - Linux kernel stable tree)
And AMD and Intel provide detailed documentation on how to program their devices.

The problem is manpower to implement drivers… deciding on an architecture, figuring out if porting drivers is viable, etc.

Emulated software rendering is definitely much, much slower than just doing what we do now, which is implementing the OpenGL API in software.

Funnily enough, there is a project in the other direction, providing hardware acceleration to the VM guest; the project is called Virgil3D. IIRC it basically forwards OpenGL/Vulkan calls to the host machine and lets its GPU run them. VirGL — The Mesa 3D Graphics Library latest documentation

1 Like

That sounds a lot like what QEMU is doing with my Asus Zephyrus G laptop… it’s using the physical WiFi connection (hardware) of my laptop to provide networking access in Haiku RISC-V (which is a purely emulated environment). I’m assuming Haiku RISC-V must have the drivers that match my Asus WiFi hardware, but QEMU is the “tunnel” by which it can access the physical hardware, correct?

Usually not. QEMU emulates some network card, or maybe just a pure tunnel, and passes it to the OS in several possible configurations (tunnel, NAT, etc.).

While it is possible to pass a device (via PCI passthrough) completely to the VM, that is usually not done (unless you are developing), since you would then not have any network connectivity over WiFi left in the host.

In the case of network cards, it will be something like this:

Application → Haiku network stack → network driver → fake, emulated network card → ethernet packets injected by QEMU into Linux in some way → Linux network stack → Linux WiFi interface driver → finally the packet is sent into the actual network

This is a relatively simple case, because network packets are designed to easily go from machine to machine, and a virtual machine is not very different from a physical machine routing its packets through another one.

For accelerated 3D rendering? That’s a lot more complex. Graphics cards are complicated things. The protocol between the driver and the graphics card is very low level. So, a virtual machine working in this way would have to intercept commands at this low level, convert them back into something high level like OpenGL commands, and then send that back into the host graphics driver stack where it gets converted again into low level commands for the graphics card.

Obviously this would be a lot of complicated code. It is what is done in emulators for modern game consoles, so it’s possible, but in terms of performance you’re restricted to 3D capabilities a few generations behind your actual graphics card and machine, because there are so many things going on.

But there’s another way. This is what nephele linked: VirGL.

It is part of “virtio”. The idea of virtio is that, instead of having virtual machines emulate actual, existing hardware, they can instead emulate “fantasy” hardware that is much simpler because it doesn’t have to be actual hardware. In the case of 3D acceleration, this is a video card to which you can pretty much directly send OpenGL commands, and it will execute them (by sending them directly to the Linux graphics stack). It is indeed one way to develop other parts of the graphics stack and solve a few of the issues, and then we can write more drivers for other graphics cards once the upper layers of the stack are in place.
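
To give a rough idea of the shape of such a “fantasy” device, here is a minimal sketch of a guest driver handing a batch of high-level commands to a virtio-gpu style device. It is loosely modeled on the command structures in Linux’s virtio_gpu.h, but the struct layout, constants, and the virtqueue_send() helper are simplified placeholders, not the exact ABI:

```c
/* Sketch of a guest driver handing a batch of high-level rendering commands
 * to a virtio-gpu style device. Loosely modeled on Linux's
 * include/uapi/linux/virtio_gpu.h; struct layout and constants are
 * simplified and not the exact ABI. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define VIRTIO_GPU_CMD_SUBMIT_3D 0x0207   /* "execute this command stream" */

struct virtio_gpu_ctrl_hdr {
    uint32_t type;      /* which command this is */
    uint32_t flags;
    uint64_t fence_id;  /* lets the guest wait for completion */
    uint32_t ctx_id;    /* rendering context on the host side */
    uint32_t padding;
};

struct virtio_gpu_cmd_submit {
    struct virtio_gpu_ctrl_hdr hdr;
    uint32_t size;      /* size of the command stream that follows */
    uint32_t padding;
};

/* Placeholder transport: a real driver would put the buffer on the device's
 * virtqueue and "kick" the host, which hands it to virglrenderer. */
static void virtqueue_send(const void *buf, size_t len) { (void)buf; (void)len; }

/* The guest never pokes registers of a real GPU; it just serializes
 * high-level commands (state changes, clears, draws) into a buffer and asks
 * the host to execute them with its own GPU. */
static void submit_commands(uint32_t ctx, const void *cmds, uint32_t cmds_size)
{
    uint8_t msg[sizeof(struct virtio_gpu_cmd_submit) + cmds_size];
    struct virtio_gpu_cmd_submit *s = (struct virtio_gpu_cmd_submit *)msg;

    memset(s, 0, sizeof(*s));
    s->hdr.type   = VIRTIO_GPU_CMD_SUBMIT_3D;
    s->hdr.ctx_id = ctx;
    s->size       = cmds_size;
    memcpy(msg + sizeof(*s), cmds, cmds_size);

    virtqueue_send(msg, sizeof(msg));
}
```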

We have already written virtio drivers for various other things, for example “balloon memory”, which is a way for a virtual machine to dynamically grow or shrink its RAM size as needed at runtime, and give the RAM back to the host system when it is not needed. This means there is no need to allocate several gigabytes of RAM to a virtual machine permanently. Of course, such a thing would not be found in actual hardware, but with virtio, it’s possible.
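
And to illustrate the balloon idea in a few lines, here is a conceptual sketch from the guest’s point of view; alloc_guest_page() and tell_host_page_is_free() are made-up placeholders for the real guest-kernel and virtio-balloon plumbing, not an actual API:

```c
/* Conceptual sketch of memory ballooning from the guest's side: "inflating"
 * means the guest allocates pages it promises not to touch and tells the
 * host it may reclaim them; "deflating" gives them back. The two helpers
 * below are made-up placeholders, not a real API. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t next_pfn = 0x1000;

static uint64_t alloc_guest_page(void)            /* placeholder: pin one guest page */
{
    return next_pfn++;
}

static void tell_host_page_is_free(uint64_t pfn)  /* placeholder: balloon virtqueue */
{
    printf("host may reclaim guest page frame %llu\n", (unsigned long long)pfn);
}

/* Inflating by n pages makes the guest's usable RAM shrink by that much and
 * hands the memory back to the host, which can give it to other guests. */
void balloon_inflate(size_t n)
{
    for (size_t i = 0; i < n; i++)
        tell_host_page_is_free(alloc_guest_page());
}
```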

6 Likes

It’s also worth mentioning the VMware virtual GPU device (aka “VMware SVGA II”, or “VMware SVGA v3” in the latest versions), which probably served as inspiration for the VirGL authors to create an open-source alternative. While VMware’s implementation of the virtual GPU is closed-source, they open-sourced its Linux device driver and published the GPU documentation, which allowed the device to be reimplemented as open source by VirtualBox (and to some extent by QEMU).

The device supports 3D acceleration, which enthusiasts have even made work on something as old as Windows 98 :exploding_head:

1 Like

The two biggest obstacles for GPU drivers are memory management and the JIT compiler. Both are difficult to work with. Fortunately the JIT situation is much better than 10-12 years ago, with AMD and Intel offering open source variants. You need those compilers to take your graphics stack output and turn it into GPU machine code. That’s a lot of work, but the Vulkan driver stack in Linux streamlined that a lot. Next up is handling the card, card memory, and card commands. AMD uses AtomBIOS (or did) for most of this. But getting the ring buffer, screen control etc. all handed off is a difficult task.
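
For context on where that compiler sits: from an application’s point of view it is invoked implicitly whenever a shader is compiled through the API, and it lives entirely in the userland driver. A minimal example using standard OpenGL calls (nothing Haiku-specific assumed; a live GL context is presumed to exist already):

```c
/* Where the driver's compiler gets involved: glCompileShader() hands GLSL
 * source to the userland driver, which (in Mesa) lowers it through NIR and a
 * backend compiler into machine code for the specific GPU. A live GL context
 * is assumed to exist already (created via GLFW/EGL/etc., omitted here). */
#define GL_GLEXT_PROTOTYPES 1   /* expose GL 2.0+ prototypes in Mesa headers */
#include <GL/gl.h>
#include <stdio.h>

static const char *frag_src =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint build_fragment_shader(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL);
    glCompileShader(sh);   /* GLSL -> GPU machine code, entirely inside the driver */

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return sh;
}
```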

An FPGA implementing a GPU core would be a way to get an open source GPU; IIRC a group already did a primitive version.

The big dog in the hunt isn’t the hardware, it’s all software afaict.

The effective way to emulate a video card/GPU would be to implement it on an FPGA; you could probably get pretty good performance.

Common misunderstanding. At least AMD does not provide any kind of documentation on low-level programming of their post-2010 GPUs. Driver source code is available, but undocumented: there is no corresponding documentation or comments in the source code (for example, there is no documentation of what “RLC” is and how to program it). The driver code was written by AMD employees referencing documentation under NDA.

1 Like

No. Basic GPU memory management can be trivially implemented, and a shader compiler already exists in the Mesa 3D driver collection and can be trivially ported to Haiku. The biggest hurdle is low-level GPU programming, which is usually done by kernel drivers and is mostly undocumented. Incorrect GPU programming causes very obscure trouble, such as whole-computer freezes, that is hard to debug. Unlike userland OpenGL/Vulkan drivers, kernel drivers are hard to port: they are usually written in very unportable code and actively use features of a specific kernel such as Linux. One good, rare case of a portable kernel GPU driver is the recently open-sourced Nvidia kernel GPU driver for the Turing+ series. It can be compiled and run on Haiku with minor effort. But Nvidia has not open-sourced the userland driver part yet, and the Mesa 3D userland driver uses an incompatible API (porting work is needed to match the ioctl API incompatibilities).

2 Likes

Could NVK + Zink be useful for Haiku in supporting NVIDIA GPUs?

Yes, the Nvidia GPU Open + NVK + Zink combination should work in theory, after porting NVK to the Nvidia RMAPI ioctls.

3 Likes

Mesa exports an IR language from OpenGL that the lower-level JIT has to dispatch, packetize, and memory-manage.

Nothing I said was factually inaccurate.

Mesa exports the final machine code to be executed by the GPU in an allocated GPU memory block. The kernel/server GPU driver does not need to know anything about the GPU instructions passed from the OpenGL/Vulkan driver. It just schedules execution of the command buffer and notifies completion.
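
A minimal sketch of that split, with deliberately made-up struct and ioctl names (real drivers expose their own driver-specific ioctls through /dev/dri/renderD*): userland has already produced the machine code and command packets, and the kernel call only gets handles to schedule:

```c
/* Sketch of the userland/kernel split. The struct and ioctl names here
 * (gpu_submit, GPU_IOCTL_SUBMIT) are hypothetical placeholders; real drivers
 * define their own ioctls (amdgpu, i915, nouveau, ...) on /dev/dri/renderD*. */
#include <stdint.h>
#include <sys/ioctl.h>

struct gpu_submit {
    uint32_t cmdbuf_handle;   /* handle of a buffer userland already filled */
    uint32_t cmdbuf_size;     /* bytes of command packets in that buffer */
    uint64_t fence_out;       /* set by the kernel, signaled on completion */
};
#define GPU_IOCTL_SUBMIT _IOWR('G', 0x10, struct gpu_submit)

/* By this point the userland driver (Mesa) has already compiled shaders to
 * GPU machine code, copied them into GPU-visible buffers, and written the
 * command packets that reference them. The kernel never parses any of it:
 * it validates buffer handles, schedules the work on a hardware ring, and
 * signals a fence when the GPU finishes. */
int submit_batch(int drm_fd, uint32_t handle, uint32_t size, uint64_t *fence)
{
    struct gpu_submit req = { .cmdbuf_handle = handle, .cmdbuf_size = size };

    if (ioctl(drm_fd, GPU_IOCTL_SUBMIT, &req) != 0)
        return -1;
    *fence = req.fence_out;
    return 0;
}
```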

2 Likes

Mesa must have radically changed their OpenGL rendering pipeline since I last looked under the hood.

It used to be

Mesa → driver → GPU, where Mesa would output an intermediate language for the driver. Hence you could have the Mesa OpenGL 3D API on top of an Nvidia or AMD binary blob driver.

Of more interest: I’m not done yet, but I’m investigating the kernel API differences between Linux and Haiku. It might make sense to add the relevant DRM and KVM functions to the Haiku kernel where they diverge, or extend the Haiku kernel API where required. This might allow an easier path forward to getting the drivers working. I’m using GPT-4 Pro to grok the drivers and code. It’s a lot of code and I’m short on time. In some cases just adding or transpiling Linux drivers might be a better path.

I won’t have time to really work on this till after the summer, but feel free to jump in.

The upside: if the Linux kernel APIs the drivers are using are added to Haiku, it should simplify porting drivers, “in theory” at least. I’m not talking about a wrapper or compatibility layer. It’s kind of inescapable anyway; the software has to obey the hardware, and in that, Haiku and Linux are the same. The hardware drives the software architecture and there’s nothing the software can do about that at all.

That must just be a misunderstanding; it’s never been that way. All of that intermediate stage occurs outside the kernel driver, in userland…

Probably the confusion comes from the fact that the kernel driver for AMD can be talked to by either the blob or Mesa… the entire compiler is always in userland regardless; the only thing the kernel driver ever sees is binary GPU programs and data.

The complexity of GPU drivers is something that should not just be reinvented… because you end up being 10 years out of date before anyone gets time to work on it. So any time Haiku can just add the APIs and adapt its expectations to what the existing drivers provide… it’s a win. It won’t be perfect, but it can work, and code that works is the best kind. Another upside is that most of the graphics subsystem in Linux is dual-licensed MIT.

As for GL dispatch, LLVM IR/assembly, NIR, winsys, and state tracking… don’t get too lost in that… it’s a trees-versus-forest situation…

Actually, Mesa 3D OpenGL was always just an OpenGL API pathway to the driver, until the AMD open source efforts spearheaded by John Bridgman at AMD kicked off in the early 2010s. Mesa 3D was never really a driver stack until that effort got underway in earnest, and Vulkan came along early in the process to simplify it. That’s why there is a Mesa Vulkan subsystem. Now, I haven’t had time to be involved or follow along on the structural changes, but that was the original intended design.

https://www.mesa3d.org/

The driver is part of X.org:

https://www.x.org/wiki/RadeonFeature/

BTW, I still think adding the Linux driver kernel APIs is the best long-term strategy. I’m not saying make it binary compatible, but making it a compile-time kind of solution is best, particularly since the current Mesa port should be able to handle the above layers as it stands (I am not sure of the current Mesa 3D API level). I would personally target Vulkan and Mesa; I think Vulkan is already available and in a working state of some sort.

I still don’t know if I will have time after the summer to cut a path to the API additions, but I would imagine the kernel devs could probably do it far faster than I could.

Maybe he is referencing the X11 OpenGL command transfer protocol (GLX indirect rendering), where the actual OpenGL driver operates in the X11 server? That is quite old technology that does not support modern OpenGL and is rarely used today.

I think this is probably a misunderstanding… even for really, really old GPUs like the Mach64, Mesa generates the code for the GPU command stream in userspace… those drivers, I think, may have been dropped from the kernel now, as there was no way to secure them due to the way you had to write to the card, but I’m not certain of that.

As far as I know, there are zero GPU kernel drivers that generate 2D or 3D command streams, even for GPUs with fixed-function APIs. And certainly none of them do this for GPUs that have programmable ISAs.

Take, for example, NetBSD drivers for very, very old framebuffers: they do most of the work in userspace… in the Xorg userspace drivers. Even for the 2D stuff… what happens in kernel mode is just modesetting and buffer management.
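
To make the “just modesetting and buffer management” part concrete, this is roughly what a userspace client asks of the kernel display driver through libdrm’s KMS API (error handling trimmed; assumes a framebuffer was already created and at least one connector is connected):

```c
/* Roughly what userspace asks of the kernel display driver via KMS/libdrm:
 * enumerate connectors, pick a mode, point a CRTC at a framebuffer. The
 * kernel side is modesetting and buffer management; all drawing into the
 * framebuffer happens in userspace. Error handling trimmed for brevity. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

void light_up_first_display(uint32_t fb_id /* framebuffer created earlier */)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0) {
            /* First advertised mode on the first CRTC: the kernel programs
             * the display hardware, nothing more. */
            drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                           &conn->connector_id, 1, &conn->modes[0]);
            drmModeFreeConnector(conn);
            break;
        }
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    close(fd);
}
```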

Neither Windows nor Linux has done what you claim for 25+ years… at least. XDDM and WDDM don’t do this; Win9x might have done this for some cards, but then we are talking about DOS-based OSes.