The ticket for accelerated 3D was updated 11 days ago with a new suggested direction of using the Linux DRM framework, which I wholeheartedly support.
NOTE: Continued work is needed on the Haiku implementations of libglvnd and libdrm2 / libdrm (kernel and userland DRI/DRM components), and on the RadeonGfx driver.
Mesa 23.1.9 is the baseline target for any Haiku GL review testing. Mesa 24.0.5 is acceptable for bug resolution review.
What, exactly, is holding up hardware 3D rendering on graphics cards? Are there any 100% open-source drivers? Or will there always be some “binary blobs” that prevent an open-source OS from taking full advantage of a graphics card?
Has anyone looked into trying to create a custom graphics card via FPGA on a PCIe bus?
AFAIK it’s the device manager redesign, which is being worked on to improve multi-monitor support etc., something BeOS didn’t really support.
I mean there is the existing port of the Vulkan driver.
An FPGA is a non-starter because it’s far too slow; you’d end up with a GPU even slower than the Nvidia fixed-function GPUs that RudolfC’s drivers support. In any case, software rendering is already much faster than this would be. As an example, even on a fast board, the GPU you can fit into an average large FPGA delivers roughly late-90s performance. Beyond that you are talking about thousand-dollar FPGAs, and that would still not get you much further.
Also consider that FPGAs tend to be pretty weak on memory bandwidth (and even then you have to fan that bandwidth out a lot inside the GPU to handle it), while GPUs are at the opposite end of the spectrum: even low-end GPUs today have well over 100 GB/s of bandwidth (note that’s big-B bytes, not bits).
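To make the bandwidth point concrete, peak memory bandwidth is just effective memory data rate times bus width. A minimal sketch (the specific clock and bus-width numbers below are illustrative, not taken from any card mentioned in this thread):

```python
def mem_bandwidth_gbs(effective_rate_gts, bus_width_bits):
    """Peak memory bandwidth in gigabytes per second (big-B bytes).

    effective_rate_gts: effective transfer rate in gigatransfers/s
    bus_width_bits: memory bus width in bits (divide by 8 for bytes)
    """
    return effective_rate_gts * bus_width_bits / 8

# e.g. a hypothetical low-end card: 14 GT/s GDDR6 on a narrow 64-bit bus
print(mem_bandwidth_gbs(14, 64))  # -> 112.0, i.e. already over 100 GB/s
```

Even this deliberately modest configuration clears 100 GB/s, which is far beyond what typical FPGA memory interfaces sustain.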
Some binary blobs are cross-platform, and the device driver does not need any knowledge of their contents; Nvidia's GSP firmware is one example. Such binary blobs do not put Haiku at any disadvantage compared to other OSes.
OK, I just read some interesting stuff about the nVidia open-source driver for Turing/Ampere cards. My Asus Zephyrus G laptop uses an nVidia GeForce GTX 1660 Ti (which is a Turing chipset), and I found this:
How much can be done with this information? Can we make any/all of this work or is there something that still hogties Haiku?
I already managed to port and run this driver as a userland server, but it needs integration with app_server modesetting and the Mesa NVK Vulkan driver to be actually useful.
I am new to Haiku, and I was kind of surprised when I saw mine (a QEMU virtual machine) at only 300+ fps. I mean, 300 fps is great, but not for something so simple. Then again, it was OpenGL and not Vulkan, but this thread explaining that there is no GPU acceleration on Haiku makes sense of what I saw!
I hope we can have it soon and that Teapot can reach at least 1000 fps.