VideoStreams: Media Kit-like 3D hardware acceleration kit for Haiku

Copied from this post:

I made an illustration of my 3D hardware acceleration architecture proposal:

[Diagram: Hardware graphics acceleration architecture]

It introduces a new native Haiku API for rendered-buffer producers and consumers, working in a way similar to the Media Kit. An application that renders 3D graphics with OpenGL or Vulkan exposes a BufferProducer interface that can be connected to a BufferConsumer interface provider, such as a screen or a compositor that mixes multiple inputs into one output. The connection is represented by a SwapChain object that owns the rendering buffers and controls the buffer-swapping process. The connected BufferProducer and BufferConsumer may live in separate processes.

OpenGL or Vulkan applications are not required to have a window; they may connect to a BufferConsumer in another process, or to some offscreen-processing BufferConsumer such as a video recorder or a network transmitter.

One screen-global surface is used for app_server 2D graphics, so changes to the app_server architecture and the increase in resource usage would be minimal. app_server will provide clipping information to the compositor by reusing the BDirectWindow mechanism.

Applications do not talk to the compositor directly; instead, they can request a BufferConsumer interface for a BWindow, BView, or BScreen.
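
To give a rough idea of the intended shape of the API, here is a sketch only; apart from BufferProducer, BufferConsumer, and SwapChain from the proposal, all method and parameter names here are invented for illustration:

```cpp
#include <Handler.h>
#include <Messenger.h>

// Illustrative sketch of the producer/consumer handshake; not the
// actual VideoStreams API.
class BufferConsumer : public BHandler {
public:
	// Called when a connected producer presents a finished buffer.
	virtual void BufferPresented(int32 bufferIndex) = 0;
};

class BufferProducer : public BHandler {
public:
	// Connect to a consumer, possibly living in another team
	// (process); communication goes over BMessage.
	virtual status_t ConnectTo(BMessenger consumer) = 0;
	virtual void Disconnect() = 0;

	// Ask the consumer to create a swap chain; the swap chain and
	// its buffers are owned on the consumer side.
	virtual status_t RequestSwapChain(int32 bufferCount) = 0;

	// Hand the current back buffer to the consumer and advance the
	// swap chain to the next buffer.
	virtual status_t Present() = 0;
};
```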

Project source code on GitHub: https://github.com/X547/VideoStreams (VideoStreams: Media Kit-like 3D hardware acceleration kit for Haiku).

28 Likes

I made the first early operational prototype. It demonstrates:

  • Producer and consumer running in separate processes.
  • Communication based on the Application Kit and BMessage; producer and consumer are BHandler-derived.
  • The producer can dynamically connect to and disconnect from the consumer; the consumer is discovered via scripting.
  • The producer requests a swap chain with 2 BBitmap buffers; the swap chain is created and owned on the consumer side.
  • The producer clones the areas of the received swap chain and draws directly into BBitmap memory created by the consumer (see the sketch below).
  • Buffers are swapped on each buffer present from the producer.
  • A test animation with a moving black rectangle is displayed.
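
A rough sketch of that buffer-sharing step: clone_area() is Haiku's real API for mapping another team's memory, but the message field name and everything else here is illustrative:

```cpp
#include <OS.h>
#include <Message.h>

// Sketch of the zero-copy buffer-sharing step. The consumer sends a
// message describing one swap chain buffer; the "area" field name is
// an assumption for illustration.
status_t CloneSwapChainBuffer(BMessage* bufferInfo, void** mappedBits)
{
	int32 sourceArea;
	if (bufferInfo->FindInt32("area", &sourceArea) != B_OK)
		return B_BAD_VALUE;

	// clone_area() maps the consumer-owned area into this team's
	// address space; the pixel data itself is never copied.
	area_id clonedArea = clone_area("swap chain buffer", mappedBits,
		B_ANY_ADDRESS, B_READ_AREA | B_WRITE_AREA, (area_id)sourceArea);
	if (clonedArea < B_OK)
		return clonedArea;

	// The producer can now draw directly into the BBitmap memory the
	// consumer created, then send a present message back.
	return B_OK;
}
```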

[Screenshot: VideoStreams1]

29 Likes

Nice!!!

Seems like this could be used to implement the multi-process rendering model needed for WebKit2, even before anything is hardware accelerated.

Also, this reminds me somewhat of the Rust crate wgpu, at least the SwapChain naming. Though I have only barely started going through this tutorial. Seems like the WebGPU stuff might be useful for Haiku.

9 Likes

How is this related to WebKit? The GSoC project for WebKit2 already had working rendered output.

I see in your diagram that you have an OpenGL context that bypasses Vulkan. That seems unrealistic, because 3D drivers with high-level constructs are usually built on top of low-level Vulkan drivers. That’s the reason Fuchsia is going to implement only Vulkan in hardware: the other abstractions exist as libraries on top of it.

OpenGL and Vulkan are graphics rendering APIs that draw into GPU buffers defined by the producer. VideoStreams allows passing rendered buffers to any process using any rendering API. I see no serious problem with passing a buffer rendered by OpenGL as a Vulkan texture.

The VideoStreams Mesa3D add-on will use private Mesa APIs to handle GPU buffers.
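
For comparison, on stacks that export GPU buffers as dma-buf file descriptors, importing an OpenGL-rendered buffer into Vulkan looks roughly like this. This sketch uses the real VK_EXT_external_memory_dma_buf extension; the device, image, file descriptor, and memory-type index are assumed to come from elsewhere:

```cpp
#include <vulkan/vulkan.h>

// Sketch: wrap an externally rendered GPU buffer (e.g. from OpenGL)
// as Vulkan device memory via VK_EXT_external_memory_dma_buf
// (requires VK_KHR_external_memory_fd as well).
VkDeviceMemory ImportDmaBuf(VkDevice device, VkImage image,
	int dmaBufFd, uint32_t memoryTypeIndex, VkDeviceSize size)
{
	VkImportMemoryFdInfoKHR importInfo = {};
	importInfo.sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR;
	importInfo.handleType =
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT;
	importInfo.fd = dmaBufFd;

	VkMemoryAllocateInfo allocInfo = {};
	allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
	allocInfo.pNext = &importInfo;
	allocInfo.allocationSize = size;
	allocInfo.memoryTypeIndex = memoryTypeIndex;

	VkDeviceMemory memory = VK_NULL_HANDLE;
	if (vkAllocateMemory(device, &allocInfo, nullptr, &memory)
			!= VK_SUCCESS)
		return VK_NULL_HANDLE;

	// Bind the imported memory to the image; no pixel copy happens.
	vkBindImageMemory(device, image, memory, 0);
	return memory;
}
```

VideoStreams on Haiku would presumably use its own buffer handles rather than dma-buf file descriptors; the point is only that zero-copy, cross-API sharing of a GPU buffer is an established pattern.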

3 Likes

OK, I was kind of wondering about that. WebGL is based on OpenGL ES v2 or greater because it works better in multithreaded environments than a central OpenGL 3+ context. Mesa supplies handles for Vulkan, OpenGL, and OpenGL ES as well anyway.

This looks to be a bit like what virgl does to forward commands out of QEMU to the host for rendering… except virgl is extremely slow (which is one of the reasons a lot of work is being put into zink to run OpenGL on top of Vulkan, which better supports being piped like that, though it still incurs a significant performance loss at present).

My understanding from the time I spent researching all this is that (1) recreating Mesa drivers is of course not feasible, and (2) the Mesa userland system interacts with the drivers via an ioctl interface that cannot reasonably be put through a pipe and consumed by some other userland process; it has to go to a kernel-mode driver.

So while you may get something like "indirect libGL" X11-style working, where you can pipe OpenGL commands to a “server” that then calls the actual graphics driver, at the end of the day, hooking into Mesa and then calling ioctls is the only real way to go. There are no other open-source graphics stacks to use, and considering how many people are employed full-time on Mesa, I don’t think recreating that is at all feasible.

3 Likes

Rendering is performed in the same process that uses OpenGL/Vulkan and provides the producer interface. Userland drivers will be loaded into the graphics API client process. The consumer will receive rasterized GPU buffers, not OpenGL drawing commands. Passing graphics API commands is not needed.

2 Likes

This system is currently supposed to be serverless; no central server process is needed. app_server is unrelated to VideoStreams operation; it is just a compositor client that provides 2 surfaces: the whole screen with 2D graphics, and the cursor. A compositor is only needed when rendering to the screen; it is not needed at all for offscreen rendering.

2 Likes

If rendering is performed in the same process that uses OpenGL/Vulkan, then I’m not sure what this producer/consumer system is even for… Most GPUs prefer to blit rendered graphics directly to the framebuffer, and incur a major performance hit if you have to send raster buffers through main memory instead of directly to video output.

1 Like

That would cause tearing. Two swappable screen buffers are needed to avoid tearing.

Buffers are passed in GPU memory without copying between producer and consumer.
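
A minimal sketch of the double-buffering idea (illustrative types only): the producer always draws into the back buffer, and presenting swaps the roles, so scan-out never reads a half-drawn frame:

```cpp
// Illustrative double-buffering sketch: two buffers alternate between
// "front" (being displayed) and "back" (being drawn into).
struct DoubleBufferedSwapChain {
	void* buffers[2];
	int backIndex = 0;

	// The producer only ever draws here.
	void* BackBuffer() { return buffers[backIndex]; }

	// Called on present: the freshly drawn back buffer becomes the
	// front buffer, and the old front buffer becomes drawable again.
	void Present()
	{
		backIndex = 1 - backIndex;
		// In a real implementation the flip would be synchronized
		// with vertical retrace to avoid tearing.
	}
};
```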

3 Likes

Mesa has facilities for buffer management, producer/consumer, vertical sync, and swapping already, as do most modern GPUs. So I guess I’m still just confused about how this fits into what Mesa already does, or how Mesa fits into this architecture you are proposing.

1 Like

Those are private Mesa APIs, so some Mesa-independent wrappers are needed. Also, Mesa does not implement all buffer management and synchronization; some parts are supposed to be done by OS-specific code and the compositor.

Mesa is used to render graphics into a GPU buffer in the process that uses the 3D graphics API. VideoStreams is responsible for passing GPU buffers between processes: accepting buffers from producers, handling dirty regions, and synchronizing buffer presentation.

  • Mesa: rendering to a GPU buffer inside a process.
  • VideoStreams: interaction between processes, or between different producers/consumers inside the same process.

I plan to introduce an HGL2 API for OpenGL instead of the BGLView-based HGL. HGL2 is based on BufferProducer and is completely independent of the window system and app_server. HGL2 will probably be simpler and easier to maintain than the existing HGL. If you want to render to a BView, you use BViewBufferConsumer and connect it to your OpenGL producer. If you want to render directly to the screen, you connect to ScreenConsumer. If you want to record video, you connect to VideoConsumer, etc.
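
A purely hypothetical usage sketch of how such connections might look; BViewBufferConsumer, ScreenConsumer, and VideoConsumer are named above, but their interfaces and GlBufferProducer are invented here:

```cpp
// Hypothetical HGL2-style usage; none of these declarations are the
// actual API, which does not exist yet at the time of this post.
class BView;

class BufferConsumer {};
class BViewBufferConsumer : public BufferConsumer {
public:
	BViewBufferConsumer(BView* view);
};
class ScreenConsumer : public BufferConsumer {};
class VideoConsumer : public BufferConsumer {};

class GlBufferProducer {
public:
	// Attach this OpenGL producer to any consumer.
	void ConnectTo(BufferConsumer* consumer);
};

void SetupRendering(BView* view)
{
	GlBufferProducer* producer = new GlBufferProducer();

	// Render into a BView...
	producer->ConnectTo(new BViewBufferConsumer(view));
	// ...or directly to the screen:
	// producer->ConnectTo(new ScreenConsumer());
	// ...or into a video recorder:
	// producer->ConnectTo(new VideoConsumer());
}
```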

8 Likes

This sounds interesting, but it also sounds like something no other OS presently does, and I’m hesitant to agree that we should try to do something like it before we even have hardware-accelerated graphics in the first place… it could make for some interesting experiments though.

2 Likes

I also note that Vulkan has a very different “swapchain” system than OpenGL does, and it may already take care of most of this through that, yes?

1 Like

The swapchain in Vulkan is an extension (VK_KHR_swapchain). Its operation depends on the OS implementation, compositor, etc.

Probably only if using Wayland.
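
For context on why its operation is OS-dependent: VK_KHR_swapchain is a device extension that the application must request at logical-device creation, and everything behind it is supplied by the platform's WSI code. A minimal sketch using the standard Vulkan API (queue setup omitted):

```cpp
#include <vulkan/vulkan.h>

// VK_KHR_swapchain must be requested explicitly when creating the
// logical device; its actual behavior comes from the platform's WSI
// implementation.
VkDevice CreateDeviceWithSwapchain(VkPhysicalDevice physicalDevice,
	const VkDeviceQueueCreateInfo* queueInfo)
{
	const char* extensions[] = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };

	VkDeviceCreateInfo createInfo = {};
	createInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
	createInfo.queueCreateInfoCount = 1;
	createInfo.pQueueCreateInfos = queueInfo;
	createInfo.enabledExtensionCount = 1;
	createInfo.ppEnabledExtensionNames = extensions;

	VkDevice device = VK_NULL_HANDLE;
	vkCreateDevice(physicalDevice, &createInfo, nullptr, &device);
	return device;
}
```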

3 Likes

We want the best, modern graphics acceleration subsystem, without the legacy of X11-like windowing systems and screens. Why not?

I often read posts about developers’ troubles on Linux when they attempt to use GPU buffers in multiple processes.

10 Likes

OpenGL across processes does not really work, yes. But I think it does work with Vulkan, or at least I was given to understand that it did; and OpenGL is now in “maintenance mode” with most development happening in Vulkan anyway.

Plus, an OS-specific API for cross-process buffer management is only useful if it actually gets used. Most big applications we would port – Blender, games, etc. – are not going to use such a feature unless we modify them to (and even then most may not have much use for it…)

It is implemented in Mesa and also by proprietary drivers on Windows. My understanding is that most applications that use Vulkan seriously will make use of it (or potentially even require it in some scenarios).

Here is a brief slide deck explaining the basics of Vulkan WSI (Window System Integration). It mentions that at least all the WSI extensions are implemented both by Mesa and by Android drivers, which indicates this is not at all Wayland-specific: https://xdc2019.x.org/event/5/contributions/313/attachments/414/664/xdc_2019_wsi_layer.pdf

1 Like

In VideoStreams, the client explicitly requests a swapchain (or creates its own if it can), which is a bit similar to Vulkan: https://github.com/X547/VideoStreams/blob/master/TestProducer/ProducerApp.cpp#L230.
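
A hypothetical sketch of what such an explicit swapchain request might look like; the SwapChainSpec fields and method names are assumptions for illustration, not the actual code behind the link:

```cpp
#include <GraphicsDefs.h>
#include <SupportDefs.h>

// Hypothetical spec describing the swap chain a producer wants.
struct SwapChainSpec {
	int32 bufferCount;       // e.g. 2 for double buffering
	int32 width, height;     // buffer dimensions in pixels
	color_space colorSpace;  // Haiku pixel format, e.g. B_RGBA32
};

// Stand-in for a VideoStreams producer object.
class BufferProducer {
public:
	status_t RequestSwapChain(const SwapChainSpec& spec);
};

status_t RequestTestSwapChain(BufferProducer& producer)
{
	SwapChainSpec spec = {};
	spec.bufferCount = 2;
	spec.width = 640;
	spec.height = 480;
	spec.colorSpace = B_RGBA32;

	// The consumer side creates and owns the swap chain, then hands
	// the buffer areas back to the producer for cloning.
	return producer.RequestSwapChain(spec);
}
```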

2 Likes