I made an illustration of my 3D hardware acceleration architecture proposal:
It introduces a new native Haiku API for rendered-buffer producers and consumers, working in a way similar to the Media Kit. An application that renders 3D graphics with OpenGL or Vulkan exposes a BufferProducer interface that can be connected to a BufferConsumer interface provided by, for example, the screen or a compositor that mixes multiple inputs into one output. A connection is represented by a SwapChain object that owns the rendering buffers and controls the buffer-swapping process. A connected BufferProducer and BufferConsumer may live in separate processes.
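A rough sketch of what these interfaces could look like (the BufferProducer/BufferConsumer/SwapChain names are from the proposal; the members and method names here are only illustrative, not a final API):

```cpp
// Illustrative only: the actual VideoStreams API may differ.
#include <OS.h>

struct SwapChainBuffer {
	area_id	area;			// shareable buffer handle
	uint32	width, height;
	uint32	stride, format;
};

class SwapChain {
public:
	// The swap chain owns the buffers; producer and consumer only
	// borrow them, one frame at a time.
	virtual int32	AcquireBuffer() = 0;			// producer side
	virtual void	PresentBuffer(int32 index) = 0;	// hand off to consumer
	virtual const SwapChainBuffer* BufferAt(int32 index) const = 0;
};

class BufferProducer {
public:
	// Called when a consumer (possibly in another process) connects
	// and the swap chain has been created.
	virtual void	Connected(SwapChain* chain) = 0;
	virtual void	Disconnected() = 0;
};

class BufferConsumer {
public:
	// Called when the producer presents a finished frame.
	virtual void	BufferPresented(int32 index) = 0;
};
```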
OpenGL or Vulkan applications are not required to have a window; they may be connected to another process's BufferConsumer, or to some offscreen-processing BufferConsumer such as a video recorder or a network transmitter.
One screen-global surface is used for app_server 2D graphics, so changes to the app_server architecture and the increase in resource usage would be minimal. app_server will provide clipping information to the compositor by reusing the BDirectWindow mechanism.
Applications do not talk to the compositor directly; instead they can request a BufferConsumer interface for a BWindow, BView, or BScreen.
Seems like this could be used to implement the multi-process rendering model needed for WebKit2, even before anything is hardware accelerated.
Also, this reminds me somewhat of the Rust crate wgpu, at least in the SwapChain naming, though I've only barely started going through this tutorial. It seems like the WebGPU stuff might be useful for Haiku.
I see in your diagram that you have an OpenGL context that bypasses Vulkan. That seems unrealistic, because 3D drivers with high-level constructs are usually built on top of low-level Vulkan drivers. That's the reason Fuchsia is going to implement only Vulkan in hardware: the other abstractions exist as libraries on top of it.
OpenGL and Vulkan are graphics rendering APIs that draw into GPU buffers defined by the producer. VideoStreams allows passing rendered buffers to any process, with any rendering API. I see no serious problem with passing a buffer rendered by OpenGL to Vulkan as a texture.
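For example, on the Mesa side a buffer exported from OpenGL as a dma-buf can be imported into Vulkan roughly like this (this is the Linux dma-buf path via the VK_KHR_external_memory_fd and VK_EXT_external_memory_dma_buf extensions; whether Haiku would pass fds or area_ids is still an open question):

```cpp
#include <vulkan/vulkan.h>

// Import an externally rendered buffer (e.g. exported from OpenGL as
// a dma-buf) as Vulkan device memory. `fd`, `size` and
// `memoryTypeIndex` come from the exporting side.
VkDeviceMemory
ImportDmaBuf(VkDevice device, int fd, VkDeviceSize size,
	uint32_t memoryTypeIndex)
{
	VkImportMemoryFdInfoKHR importInfo = {};
	importInfo.sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR;
	importInfo.handleType
		= VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT;
	importInfo.fd = fd;	// ownership passes to Vulkan on success

	VkMemoryAllocateInfo allocInfo = {};
	allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
	allocInfo.pNext = &importInfo;
	allocInfo.allocationSize = size;
	allocInfo.memoryTypeIndex = memoryTypeIndex;

	VkDeviceMemory memory = VK_NULL_HANDLE;
	if (vkAllocateMemory(device, &allocInfo, NULL, &memory) != VK_SUCCESS)
		return VK_NULL_HANDLE;
	// The memory can now be bound to a VkImage created with
	// VkExternalMemoryImageCreateInfo and sampled as a texture.
	return memory;
}
```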
The VideoStreams Mesa3D add-on will use private Mesa APIs to handle GPU buffers.
OK, I was kind of wondering about that. WebGL is based on OpenGL ES 2 or greater because it works better in multithreaded environments than a central OpenGL 3+ context. Mesa supplies handles for Vulkan, OpenGL, and OpenGL ES anyway.
This looks to be a bit like what virgl does to forward commands out of QEMU to the host for rendering… except virgl is extremely slow (which is one of the reasons so much work is being put into zink, which runs OpenGL on top of Vulkan; Vulkan better supports being piped like that, though it still incurs a significant performance loss at present).
My understanding from the time I spent researching all this is that (1) recreating Mesa drivers is of course not feasible, and (2) the Mesa userland system interacts with the drivers via an ioctl interface that cannot reasonably be put through a pipe and consumed by some other userland process; it has to go to a kernel-mode driver.
So while you may get something like "indirect libGL" X11-style working, where you can pipe OpenGL commands to a “server” that then calls the actual graphics driver, at the end of the day, hooking into Mesa and then calling ioctls is the only real way to go. There are no other open-source graphics stacks to use, and considering how many people are employed full-time on Mesa, I don’t think recreating that is at all feasible.
Rendering is performed in the same process that uses OpenGL/Vulkan and provides the producer interface. Userland drivers will be loaded into the graphics API client process. The consumer will receive rasterized GPU buffers, not OpenGL drawing commands, so no passing of graphics API commands is needed.
This system is currently intended to be serverless: no central server process is needed. app_server is unrelated to VideoStreams operation; it is just a compositor client that provides two surfaces, the whole screen with 2D graphics and the cursor. The compositor is only needed for rendering to the screen; it is not needed at all for offscreen rendering.
If rendering is performed in the same process that uses OpenGL/Vulkan, then I’m not sure what this producer/consumer system is even for… Most GPUs prefer to blit rendered graphics directly to the framebuffer, and incur a major performance hit if you have to send raster buffers through main memory instead of directly to video output.
Mesa has facilities for buffer management, producer/consumer, vertical sync, and swapping already, as do most modern GPUs. So, I guess I’m still just confused how either this fits in to what Mesa already does, or how Mesa fits into this architecture you are proposing.
It is a private Mesa API, so some Mesa-independent wrappers are needed. Also, Mesa does not implement all of the buffer management and synchronization; some parts are supposed to be done by OS-specific code and the compositor.
Mesa is used to render graphics into a GPU buffer in the process that uses the 3D graphics API. VideoStreams is responsible for passing GPU buffers between processes: accepting buffers from producers, handling dirty regions, and synchronizing buffer presentation.
Mesa: rendering to a GPU buffer inside a process.
VideoStreams: interaction between processes, or between different producers/consumers inside the same process.
I plan to introduce an HGL2 API for OpenGL to replace the BGLView-based HGL. HGL2 is based on BufferProducer and is completely independent of the window system and app_server; it will probably be simpler and easier to maintain than the existing HGL. If you want to render to a BView, you use a BViewBufferConsumer and connect it to your OpenGL producer. If you want to render directly to the screen, you connect to a ScreenConsumer. If you want to record video, you connect to a VideoConsumer, and so on.
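None of this exists yet, but usage could look roughly like this (HGL2Context and the connection calls are placeholders for whatever the final API turns out to be):

```cpp
// Hypothetical HGL2 usage; class and method names are placeholders.
#include <View.h>

void
SetUpRendering(BView* view)
{
	// An OpenGL context that exposes a BufferProducer instead of
	// being tied to a BGLView.
	HGL2Context* gl = new HGL2Context(/* pixel format, etc. */);

	// Render into a view...
	BViewBufferConsumer* consumer = new BViewBufferConsumer(view);
	gl->Producer()->ConnectTo(consumer);

	// ...or connect the same producer to a ScreenConsumer or a
	// VideoConsumer instead, without changing any rendering code.
}
```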
This sounds interesting, but it also sounds like something no other OS presently does, and I’m hesitant to agree that we should try to do something like it before we even have hardware-accelerated graphics in the first place… it could make for some interesting experiments though.
OpenGL across processes does not really work, yes. But I think it does work with Vulkan, or at least I was given to understand that it did; and OpenGL is now in "maintenance mode", with most development happening in Vulkan anyway.
Plus, an OS-specific API for cross-process buffer management is only useful if it actually gets used. Most big applications we would port – Blender, games, etc. – are not going to use such a feature unless we modify them to (and even then most may not have much use for it…)
Cross-process buffer sharing is implemented in Mesa and also by proprietary drivers on Windows. My understanding is that most applications that use Vulkan seriously will make use of it (or potentially even require it in some scenarios).
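For reference, the exporting side of that mechanism in Vulkan is roughly this (assuming the memory was allocated with VkExportMemoryAllocateInfo; how the fd is delivered to the other process is platform-specific and not shown):

```cpp
#include <vulkan/vulkan.h>

// Share a Vulkan allocation with another process as a POSIX fd,
// via the VK_KHR_external_memory_fd extension.
int
ExportMemoryFd(VkDevice device, VkDeviceMemory memory)
{
	VkMemoryGetFdInfoKHR getFdInfo = {};
	getFdInfo.sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR;
	getFdInfo.memory = memory;
	getFdInfo.handleType
		= VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT;

	// Extension entry points must be fetched at runtime.
	PFN_vkGetMemoryFdKHR getMemoryFd = (PFN_vkGetMemoryFdKHR)
		vkGetDeviceProcAddr(device, "vkGetMemoryFdKHR");

	int fd = -1;
	if (getMemoryFd == NULL
		|| getMemoryFd(device, &getFdInfo, &fd) != VK_SUCCESS)
		return -1;
	return fd;	// send to the consumer process, e.g. over a socket
}
```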