State of Accelerated OpenGL

I’m looking at Haiku-OS from time to time to see where it is heading. I come from a field working with accelerated GPU programming, so I’m especially interested in how this develops. I’ve lost track, though, of whether accelerated OpenGL is still being worked on, or whether there is actually any chance of running (I guess later rather than sooner) accelerated OpenGL on hardware like the Radeon 7xxx series. Since AMDGPU is coming around with an open structure, is development getting picked up again? Or is this not a topic for the near future?

It’s just “Haiku” not “Haiku-OS” (we haven’t been able to buy…)

No, there’s no hardware acceleration at the moment.

The only person who knows enough about the internals of both Haiku and Mesa3D to even attempt such a project is @kallisti5, and he’s too busy for a large number of reasons. However, since DRI3 is now a thing, and it’s mostly window-system-agnostic, the effort required to get something working is much lower. That said, there’s still a lot of technical nuance involved, and it would be at least a few hundred hours of work, I’d guess.

So, not anywhere in the near future, unfortunately.

So, leaving GPU acceleration out of the picture, where does OpenGL stand? 2.x, 3.x, 4.x? Is the Mesa package heavily modified for Haiku?

On GCC2, we have Mesa 7.9, which implements OpenGL 2.1. On GCC5, we have Mesa 11.0, which implements OpenGL 4.1. Mesa 7 is modified a lot for GCC2 compatibility; Mesa 11 is mostly out-of-the-box (although there were a number of changes that were upstreamed).

GCC2’s Mesa 7.9 uses software rasterization.
GCC5 is stock upstream Mesa (sometimes with minor build fixes) and leverages the LLVM-accelerated softpipe (llvmpipe) renderer.

The newer, LLVM-based renderer on GCC5 performs better under multi-CPU workloads… however, it is newer, buggier code.

I think there are a few more people who would be able to work on this besides kallisti5 (I’ve seen Rudolf Cornelissen around the IRC channel some months ago…). And, like any other part of the OS, it’s not crazy magic going on, so people could dig into it and get things going.
Here is a short list of what would be needed:

  • Mesa really wants to use DRI, so our graphics driver would have to somehow expose a similar API (and we’d have to decide how Mesa gets to it: talking directly to the driver, going through the accelerant, or…),
  • BGLView would probably need some rework to work with things other than software rendering,
  • The video drivers need to be completed to handle the low-level part of the work: memory allocation (either using areas, or a dedicated allocator as Linux uses, if areas turn out not to be flexible enough), managing the GPU command pipe and feeding it commands, and possibly scheduling between different GL contexts if multiple BGLViews are around (unless Mesa or the accelerant can handle that).

When it comes to using the GPU for raw computation and number crunching, we can get all the Mesa/accelerant part out of the way, and only the driver side of the work would be needed (with an appropriate userland plugged onto it).

Unfortunately, everyone has limited time and there are often more urgent things to do. So there hasn’t been much progress on this in the last few years.

I’m mostly interested in the newer GCC, so Mesa 11 sounds fine. That said, what if Haiku went directly for Vulkan instead of OpenGL for high-end rendering, leaving OpenGL for regular applications that don’t need to leverage all the power they can get? After all, Vulkan is defined quite slim compared to OpenGL, shifting the administrative workload onto the application instead of the driver. That might be of actual use in the case of Haiku, where the high-end GPU stuff is not yet written in stone.

I’m not sure it changes much for us: all the work on OpenGL is done by Mesa, which already converts things to low-level calls it feeds to the GPU driver. What Vulkan does is remove a large part of this and expose something closer to the driver interface directly to apps, but we will probably still use Mesa or something similar to achieve it anyway.

The hard part is adding the pipelines to mesa to talk to the hardware. This is where the DRI Hell magic comes in.

GCC2 will never have hardware acceleration… the version of Mesa that will build on gcc2 is really outdated and Mesa has 0 interest in supporting gcc2 in modern code (and I don’t blame them)

I was writing a “hardware rendering pipeline” wrapper once upon a time that can talk to any hardware device (DRI, Haiku, etc.) but haven’t done much with it lately. You would have to talk the Mesa developers into supporting it, which would be a hard sell… they really like DRI even though it is very Linux-centric. (They say it’s cross-platform, but it requires a lot of X-centric stuff… DRI3 was supposed to fix a lot of the Xorg dependencies.)

The build log below is from Linux. Build under Haiku and it automatically adjusts to the Haiku accelerant interface.

```
kallisti5@avongluck01:~/Code$ cd librendomatic/
kallisti5@avongluck01:~/Code/librendomatic$ ls
docs  include  mkdocs.yml  run_tests  SConstruct  src  tests
kallisti5@avongluck01:~/Code/librendomatic$ scons
scons: Reading SConscript files ...
posix
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/backend/dri/bo.o -c -g -g -Iinclude -Isrc -Isrc/backend -Isrc/backend/dri -I/usr/include/libdrm src/backend/dri/bo.c
gcc -o src/backend/dri/entry.o -c -g -g -Iinclude -Isrc -Isrc/backend -Isrc/backend/dri -I/usr/include/libdrm src/backend/dri/entry.c
src/backend/dri/entry.c: In function ‘base_device’:
src/backend/dri/entry.c:169:6: warning: #warning DRI: TODO: Get card base address! [-Wcpp]
 #warning DRI: TODO: Get card base address!
      ^
gcc -o src/bufferobject.o -c -g -g -Iinclude -Isrc -Isrc/backend -Isrc/backend/dri -I/usr/include/libdrm src/bufferobject.c
gcc -o src/util.o -c -g -g -Iinclude -Isrc -Isrc/backend -Isrc/backend/dri -I/usr/include/libdrm src/util.c
gcc -o src/rendomatic.o -c -g -g -Iinclude -Isrc -Isrc/backend -Isrc/backend/dri -I/usr/include/libdrm src/rendomatic.c
ar rc src/librendomatic.a src/bufferobject.o src/util.o src/rendomatic.o src/backend/dri/entry.o src/backend/dri/bo.o
ranlib src/librendomatic.a
gcc -o tests/devicebo.o -c -g -g -Iinclude -Isrc -Isrc/backend -I/usr/include/libdrm tests/devicebo.c
gcc -o tests/devicebo tests/devicebo.o -Lsrc -ldrm -lrendomatic
gcc -o tests/deviceopen.o -c -g -g -Iinclude -Isrc -Isrc/backend -I/usr/include/libdrm tests/deviceopen.c
gcc -o tests/deviceopen tests/deviceopen.o -Lsrc -ldrm -lrendomatic
scons: done building targets.
kallisti5@avongluck01:~/Code/librendomatic$ ./tests/deviceopen
librendomatic-trace: rendo_initialize()
librendomatic-trace: Using DRI backend.
librendomatic-trace: open_device()
librendomatic-trace: open_device: Found card /dev/dri/card1
librendomatic-trace: open_device: Found card /dev/dri/card0
librendomatic-trace: connect_device()
librendomatic-trace: connect_device: authenticated to DRI2
librendomatic-trace: connect_device: info: nouveau, nVidia Riva/TNT/GeForce/Quadro/Tesla, 20120801
librendomatic-info: context initialization successful.
librendomatic-trace: rendo_destroy()
librendomatic-trace: close_device()
librendomatic-info: context destruction successful.
```

Implementing DRI wrapper code within the accelerant would be the easier sell to the Mesa developers and a lot less stress… the DRI2 APIs are pretty horrible, though.

In theory we could eliminate the DRI2 card-centric code and put it into the accelerant, then just point Mesa at each card device in /dev/graphics/*
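As a sketch of what “pointing Mesa at each card device” could look like, here is a plain directory scan over a device path. The function name and the idea of passing the directory in are assumptions for illustration, not anything that exists in Mesa or Haiku today; on Haiku the path would be /dev/graphics, on Linux /dev/dri.

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// Hypothetical: enumerate card device nodes under a given device directory.
// A real backend would additionally open each node and query the accelerant
// (Haiku) or the DRM driver (Linux) to see whether it is usable.
std::vector<std::string> enumerate_cards(const std::string& devDir)
{
    std::vector<std::string> cards;
    DIR* dir = opendir(devDir.c_str());
    if (dir == nullptr)
        return cards;  // directory missing: no cards found
    while (struct dirent* entry = readdir(dir)) {
        std::string name = entry->d_name;
        if (name == "." || name == "..")
            continue;
        cards.push_back(devDir + "/" + name);
    }
    closedir(dir);
    return cards;
}
```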

Lots, and lots of work to do… and so little time.

Start with improving / fixing up LLVM softpipe rendering under Haiku… the latest Mesa code doesn’t work so well under Haiku anymore and needs some minor patching. The Mesa developers are always changing what arguments functions take, and since they use function pointers for everything, this creates a lot of code that “compiles but crashes when used”.

Contributing to Mesa:

  1. Sign up for the Mesa mailing list:
  2. Work on the code in your own git repo.
  3. Once you have some patches, git send-email them to
  4. Once someone signs off, rewrite your commit and stick their “approval” line at the bottom of the commit message.
  5. Push the changes (if you have commit access) or ask the approver to if you don’t.
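Sketched as commands, that flow might look like this; the branch name, patch files, and the reviewer address are placeholders, not the real ones (the actual list address is elided above):

```sh
git checkout -b haiku-build-fix       # step 2: work in your own repo
# ...hack, commit...
git format-patch origin/master        # produces one .patch file per commit
git send-email --to=<list address> *.patch   # step 3
# step 4: after review, amend the commit and append the approval, e.g.
#   Reviewed-by: Jane Developer <jane@example.org>
git commit --amend
```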

Mesa upstream contains our entire OpenGL kit: (GCC4,5)

We forked the older Mesa for GCC2. We 100% own this fork.


What exactly does this now mean for our case?


First, thank you for the work of getting the radeon_hd driver into as good a shape as it is. My Haiku X64 works much better with it!

Can you also post links to where in the Haiku source the Radeon driver and accelerant are?

It looks like the accelerant is in /src/add-ons/accelerants/radeon_hd, just want to be sure.

I didn’t see a specific radeon_hd driver.

Also, I see the older documentation on making your own video driver, but is there more documentation that better describes how the video driver is loaded at boot, and how the app_server / BGL window uses the driver?

Again, thank you.

The driver is here:

The app_server loads the accelerant as an add-on and uses the provided API:

Currently, BGLView does not use the driver at all; it does the rendering in software and draws on app_server’s frame buffer, using a BDirectWindow.

Speaking of BDirectWindow… I’m missing the Haiku counterparts of glXSwapBuffers, glXMakeCurrent, and other vital OpenGL functions. I grepped all over the headers but found nothing of use. OpenGL requires a current context to draw anything, so I’m at a loss as to where it gets it from.

You can use BGLView::LockGL, UnlockGL, and SwapBuffers methods:

LockGL stores the context into thread-local storage, then you can use OpenGL calls from the thread that locked the context.
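A minimal sketch of that pattern, assuming a Haiku build environment: it uses the real BGLView API (LockGL, UnlockGL, SwapBuffers), but the class name `SpinningView` and the drawing code are just illustration, and error handling plus the BApplication setup are omitted.

```cpp
#include <GL/gl.h>
#include <GLView.h>

class SpinningView : public BGLView {
public:
    SpinningView(BRect frame)
        :
        BGLView(frame, "gl view", B_FOLLOW_ALL_SIDES, 0,
            BGL_RGB | BGL_DOUBLE | BGL_DEPTH)
    {
    }

    // Call from whichever thread does the rendering.
    void Render()
    {
        LockGL();      // binds the context to this thread (cf. glXMakeCurrent)
        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ...ordinary GL calls are valid between LockGL and UnlockGL...
        SwapBuffers(); // the counterpart of glXSwapBuffers
        UnlockGL();    // releases the thread-local binding
    }
};
```

The view is then attached to a window like any other BView.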

Simple examples of GL apps are the screen savers or Haiku3D.

Alternatively, the GCC5 version of Mesa comes with EGL support, so you can use the EGL APIs as well.

That’s not so bad. To deal with Android I had already put in EGL support. But I prefer pure OpenGL if I have the choice. EGL is somewhat “different” from OpenGL… not always in the nicest ways possible.

Concerning BGLView, that’s the only way to get it, right? Just so I get this right: what’s important is only BGLView, not BDirectWindow? So I could produce a BGLView and place it into any BWindow? That would be kinda neat :smiley:

Thank you! I would like to better understand the screen rendering flow. It seems that Mesa is an extra layer behind a layer. My understanding is that Vulkan should remove these layers and put more on the application developer.

I’m really curious whether the accelerant could implement Vulkan and the app_server use it directly (or whatever context window). Then OpenGL could be written using the Vulkan driver as the layer and bypass Mesa altogether. It seems the Linux Radeon Vulkan driver is not in Mesa, though the Linux Intel driver is.

Again, Thank you.

Yes, BGLView can work both in “direct” and “indirect” modes. For example, the ScreenSaver preview uses “indirect” rendering, while the screensaver running fullscreen uses direct rendering.

Direct rendering is supposedly faster, as it allows completely bypassing app_server and its double-buffered rendering, but I’m not sure if that’s the case with the current implementation.

Well, the main problem is that the OpenGL API was originally developed in the 1990s. Video cards have changed quite a lot since then, and several parts of the OpenGL API are either not appropriate for modern hardware or not what most apps expect from it anymore. It turns out Mesa (and other GL implementations) have to do a lot of work to keep these old parts working, even though modern apps no longer use them.

OpenGL-ES removed some of the old parts, and Vulkan drops everything and starts from scratch.
It would be a lot of work to reimplement OpenGL on top of Vulkan, so we are probably not going to do it. Maybe the Mesa guys will do so? Or maybe they will plug Vulkan above their stack, like they do for OpenGL and, to some extent, for DirectX (yes, Mesa can also provide some DirectX APIs to apps).

For us Haiku developers, it makes sense to plug into the backend side of Mesa, and do just that. Then Mesa can implement whatever API is the current hype above that, and we don’t have to rewrite all our drivers and accelerants each time they change their mind.

I’m asking because I support two usage modes: hosted and not hosted. In not-hosted mode my renderer produces a window to do full-screen or whatever with it. In that case I would produce a BDirectWindow and attach a BGLView to it. Hosted mode is interesting, though, since it allows “injecting” high-end rendering into regular applications. Hosted here means the renderer puts its render window into a provided host window. Under Linux it simply creates an accelerated window as a child of the host window and is done with it. So here I can only create a BGLView and put it into the provided BWindow. But I cannot be sure it is a BDirectWindow. That would be the user’s responsibility: to create the right top-level window. A good enough solution for me.

EDIT: “Hype” is a good word. Vulkan has its uses, but the benefits are over-hyped. Badly written game engines (I’m looking here at certain AAA developers) do benefit from it, but more due to the original design being so butt-hurt it just hurts. For a well-developed engine the benefits are marginal.

For OS designers, though, it’s something else. You can get a Vulkan backend up and running way faster than a full-blown OpenGL backend, since you are set free of all the management hassles; those become the job of the Vulkan users. To be honest, I would not put my money on Mesa in terms of Vulkan. Mesa is quite the mixed bag… and you never know what kind of explosive sweets are hidden in that bag.