I mean… that still doesn’t mean anything major is being done in the kernel like he described?
Also… Mesa had hardware-accelerated drivers before ATI/AMD started contributing. It isn’t true that Mesa wasn’t a functional driver before then: it was reverse engineered for older GPUs like the r200 series, and the documentation was released later. Those are the classic DRI drivers. And yeah, those drivers have a less secure kernel interface, but they aren’t running any of the GL API in the kernel… the kernel is just getting sent commands, much like with the more modern drivers. 99% of the heavy lifting the driver does is still in userspace.
The kernel side controls the ring buffer, memory management, frame sync, card clocks, and setup of shaders. In userspace, the Vulkan or Mesa API layers process API calls and lower them to the IR layer (“Vulkan is almost bare metal”), which then JIT-compiles code for the card.
API → userland IR compiler → kernel driver
The kernel driver does communicate back to the userland IR compiler etc. for sync and the like.
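To make the split above concrete, here’s a toy model of the flow: API call → userland IR compiler/JIT → kernel driver, with a fence value flowing back for sync. All the names and structures here are illustrative only, not real Mesa or DRM interfaces.

```python
# Toy model of the userspace/kernel split described above.
# Hypothetical names throughout; no real driver API is used.

def compile_to_ir(api_call):
    """Userland: lower a GL/Vulkan-style API call into an IR form."""
    return {"op": api_call["op"], "ir": f"ir:{api_call['op']}"}

def jit_compile(ir):
    """Userland: JIT the IR into 'machine code' for the card."""
    return f"gpu-code({ir['ir']})"

class KernelDriver:
    """Kernel side: only schedules command buffers; it never sees GL."""
    def __init__(self):
        self.ring = []          # simplified ring buffer of command buffers
        self.completed = 0      # fence counter reported back to userland

    def submit(self, command_buffer):
        self.ring.append(command_buffer)
        self.completed += 1     # pretend the GPU retired it immediately
        return self.completed   # the fence userland waits on for sync

driver = KernelDriver()
fence = driver.submit(jit_compile(compile_to_ir({"op": "draw"})))
print(fence)                    # userland syncs on this fence value
```

The point of the sketch: the kernel never touches GL or Vulkan semantics, it only moves already-compiled command buffers and reports completion back up.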
AFAICT x512 got the userland side working. What’s lacking, AFAIK, is the kernel-side code needed for AMD AtomBIOS to work, which is a whole other topic.
Compiling the shaders isn’t that difficult; managing the card’s resources is. There’s a lot of communication and programming going on between the JIT/driver and the hardware layer.
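One example of why the resource side is the hard part: the kernel has to track how full the command ring is and stall submissions until the hardware has consumed enough entries. A minimal sketch of that bookkeeping, under the simplifying assumption of a single ring and an instantly-retiring “GPU” (all names invented for illustration):

```python
# Illustrative-only ring-buffer bookkeeping of the kind the kernel
# side does: userland may only write when the (simulated) GPU has
# consumed enough entries. Not a real driver interface.

class RingBuffer:
    def __init__(self, size):
        self.size = size
        self.entries = [None] * size
        self.write = 0          # next slot userland writes into
        self.read = 0           # count of commands the GPU has retired

    def free_slots(self):
        return self.size - (self.write - self.read)

    def push(self, cmd):
        if self.free_slots() == 0:
            raise BlockingIOError("ring full: must wait for the GPU")
        self.entries[self.write % self.size] = cmd
        self.write += 1

    def gpu_consume(self):
        """Simulate the hardware retiring one command."""
        if self.read < self.write:
            self.read += 1

ring = RingBuffer(2)
ring.push("draw-1")
ring.push("draw-2")
try:
    ring.push("draw-3")     # ring full: the driver must block/sync here
except BlockingIOError:
    pass
ring.gpu_consume()          # hardware retires one entry...
ring.push("draw-3")         # ...and the stalled submission can proceed
```

Every arrow in this back-and-forth (full ring, fence wait, retire, resume) is the kind of kernel↔hardware↔userland traffic the post is talking about, and it dwarfs the one-shot work of compiling a shader.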
This is what I am reviewing right now. The question is: how much of this requires new kernel code, and how much can just be brought straight in? At the end of the day, the hardware drives the design of the OS at this layer. We can pontificate on the software all day, but the hardware is the end of the road here; all operating systems must basically obey the hardware and implement a relatively similar design. Why redesign this when the work has already been done, and it might not be hard to add those facilities to Haiku instead of crafting a whole rework of the existing code base just to get back to this square?