Libdrm now officially supports 2 operating systems upstream!

This is kind of a big deal: previously only Linux was supported upstream, and now both Linux and FreeBSD are. Hopefully this also leads to a more stable libdrm.

It would be interesting if Haiku support could be added directly to libdrm as well, instead of as a wrapper or as patches on the Haiku end of things. Developments like this enhance the likelihood of Haiku getting 3D drivers someday.


The Haiku work on Mesa is already upstreamed. Surely the libdrm part will be, too, when we get to that. But the kernel side of DRM we still need to implement on our own. That’s the big limitation of the whole gallium/DRM thing: it turns out you still need quite a lot of work done in the kernel-side driver.


I don’t know where you are getting this from; even in the pull request there is very blatantly support for DragonFlyBSD in the code as well (note the __DragonFly__ preprocessor macros.)

All the changes are behind #ifdef __FreeBSD__ or the equivalent thereof, save one, so as far as I can tell, it won’t affect Linux or other OSes at all…

libdrm is mostly just a bunch of wrappers around ioctl; in fact, I’ve been told a lot of drivers inside Mesa bypass libdrm and just call the ioctls themselves. It’s basically “portable” in that sense already, so long as there are devices that actually expose those ioctls…

No, it really does not, and I think I’ve explained that before.

As I have stated many times before: gaining support for graphics acceleration on Haiku requires:

  1. porting the kernel DRI drivers from Linux (as the *BSDs did, none wrote their own)
  2. modifying app_server to use these (probably requires a significant overhaul of the accelerant interface as it is not designed for this kind of driver)
  3. overhauling our Mesa port to utilize the DRI drivers (and modifying app_server/accelerants further to support DRI authentication, etc.)

I still have a standing offer to work on #1, if someone else with the requisite knowledge will tackle #2 and #3. Nobody has yet taken me up on that offer, so I haven’t put very much time into #1, as the KMS-DRM/DRI drivers are practically useless without the right userland components.

There is basically nothing upstream can do here that will help us (besides, of course, actually porting these things to Haiku.) Obviously FreeBSD, OpenBSD, etc. already have ports of libdrm, and it does not really make any difference at all to us whether they are in-tree or out-of-tree.


There would be no harm in porting only the kernel-land parts. Someone else can do the userland work later; there is no need to do both simultaneously. As a first step, the DRM driver could be used just for modesetting.


It can’t without an accompanying accelerant add-on in app_server to communicate with it, which will require refactoring of app_server and also some of the Kits, as accelerants can be, and are, loaded by more than just app_server for direct control over certain things.

Changing the accelerant interface is not needed for a framebuffer modesetting accelerant module. It would just call the DRM module’s ioctl interface.

An example of setting a framebuffer with DRM ioctls:

@waddlesplash behave… :stuck_out_tongue: (edited below: I don’t want people to think badly of you or your work, but I’m still a bit annoyed).

These are patches that were sitting in the FreeBSD ports tree, period.

The fact that FreeBSD is maintaining their code in libdrm upstream now does mean that it will remain more stable, with fewer wild Linux changes; at least it is more likely anyway…


From my knowledge of Intel graphics cards, I’d say it would be possible to keep our modesetting driver and have a completely separate driver for the DRM/3D acceleration part. Which makes sense; after all, your 3D acceleration may be rendering things off-screen. There will be some adjustments (for example, the code that allocates the framebuffer on the old-driver side should make sure that memory is exclusive from whatever the 3D part allocates), but I don’t think there would be a lot of interference.

I still don’t see what you would change in app_server even if you used the linux driver kernel-side to do the modesetting. After all, the app_server only really calls the modesetting code, the accelerant interface isn’t even used for anything else currently (and few apps even need or use BWindowScreen to call the hooks directly). No hardware cursor, no scrolling, no blitting, … ok, maybe you need the hook to set a color palette for the 256 color video modes, but that’s something we can also disable while we experiment with 3D acceleration.

So I’d say we could get most of the 3D part running without touching most of the existing app_server code at all.


The intel_extreme accelerant allocates the primary command ring buffer and uses it for V-sync, so at least that would have to be rewritten to not use the primary ring. And that is just off the top of my head, from glancing at this months ago.

DRM requires one process (and only one) to be the “DRM Master”, which does mode-setting and other modifications of core state. This would require a significant refactor of the Kits and the accelerant system, as I mentioned, to send all their operations as commands to app_server instead of loading the accelerant into the application and invoking it to change modes and use a new framebuffer directly.

One of the Mesa developers also took a look into the accelerant interface in general and noted some things that would have to be changed, but I can’t remember what their analysis was in detail, I would have to go back and look. They thought it might actually be easier to just drop the accelerant interface entirely and write a new one; however this was in the context of DRM acceleration and not just modesetting.

Mode-setting maybe, but we definitely need to modify app_server (or create a new display_server.) And also, if we don’t want the DRM code in the kernel but in userland as previously mentioned, then we are really in uncharted territory.


Yes, I don’t dispute that; I am just disputing that this has any relevance to us (again, see the code itself.)

Again, libdrm is mostly insignificant in the grand scheme of things; I think kallisti5 (with some assistance from me) got it to build under Haiku a while back. It’s relatively easy to “port”.

I am not being “contrarian” here just for the sake of it. I’ve actually done a bunch of research, read a chunk of the DRM driver code, talked to the Mesa developers, talked to the DragonFlyBSD developer who maintains their DRM port, and actually started down one path of attempting to compile it for Haiku (using the FreeBSD compatibility layer, but this proved to be far too hacky, as FreeBSD has another compatibility layer on top of that, so I archived the code and abandoned that attempt).

Huh? I very clearly stated that I do not intend to work on this by myself; I know very little about Mesa internals and trying to learn that, by myself, at the same time as modifying app_server and porting new drivers does not sound very fun. So there are better things I’ll spend my time on, until I have a collaborator, at least.

I also don’t know what you mean by “squatting.” This is open source, I am stopping nobody from doing anything; in fact I’m inviting whoever has the requisite knowledge to work with me (or I with them, if they know more than I about the DRM drivers, too.)

How am I deterring anyone?

All I am doing in these replies is explaining, technically, what has to be done to port the DRM acceleration stack. It is of course possible that I am wrong, but I have read a lot about this and talked to quite a number of knowledgeable people about it, so I am probably not “that” wrong.

Nowhere in here have I said “this is impossible” / “this can’t be done” / “nobody should do this” / etc. or other deterring statements. I have said “this particular news item is not relevant to Haiku porting the DRM stack”, and explained why. That’s not deterrence…


It uses interrupts for vsync (wait_for_retrace waits for a semaphore that is released by the interrupt); as far as I know this does not go through the ring buffer. It may currently use it for knowing when to upload the framebuffer from RAM to VRAM, but there is tearing, so I don’t see why we would bother to synchronize that to the vsync. Or, if we’re serious about that, we should be doing triple buffering (one buffer in RAM, two in VRAM) so we can upload to the currently not-displayed buffer and flip cleanly on vsync.

But it still seems possible to do a lot of DRM things without ever touching app_server at all (or even removing it completely from the image while you test your 3D driver with some test app, possibly using EGL or whatever). On the other hand, it seems quite impossible to have a DRM-based app_server accelerant interface when there is no DRM driver to plug it into.

The way I see it, the app_server would still use the accelerant interface (just for modesetting) and apps would use drm (with a drm_server if that’s needed) to do 3d stuff. I don’t see why the framebuffer driver and the 3D acceleration have to be in the same driver at all.

Mode setting is the only thing app_server does with the graphics card currently, really. It’s then all drawing to a buffer in RAM and copying that to VRAM, and that’s about it. We may even unplug the part where it copies to VRAM and have that handled elsewhere.


No, it does: QueueCommands uses the ring, and that appears to be how blitting occurs, too. (Elsewhere, the command-posting APIs are indeed used to acquire Vsync.)

The DRM Master is what has to do the modesetting, among other things. I guess we could put this in a different server (actually, for multi-user that may not be a bad idea; then there would be more than one app_server, one for each logged-in user… hmm.)

This can be done without modifying the accelerant API. The first instance of an accelerant can create a port, and secondary instances can connect to that port to communicate. The first instance would manage the global state, and the secondary instances would send commands to it to access that state.

Haiku is a single-user system; a multi-user desktop system is not needed nowadays. Just let each user have a separate PC. Even if multi-user were implemented, there would be no need for many app_server instances, and other OSes don’t do that (there is one win32k.sys instance per system in Windows).

I think that you are trying to replicate the Linux design. It looks like cargo cult to me; there is no need to design the graphics stack the same way as Linux. If you ask Linux developers, it is not surprising that they will recommend the Linux way.

Sounds like a hack. Why not just fix the accelerant API instead of spending the time on that?

This is also just one hurdle of many, anyway.

We have for a long time planned to implement multi-user, and already have it at the kernel/filesystem level. If nothing else, we should run applications as non-root for privilege separation. But this is a separate debate.

Each user will have their own “desktop” with windows, etc., in case two users are signed in at once, right? Especially considering app_server does a lot of drawing server-side (it’s not just a “window server” like Wayland), it seems to make much more sense to have an instance of app_server for each user.

This is what Linux does with X11 (and I think also with Wayland.) I’m not sure why you are mentioning Windows as an example here. But we are not either and do not have to copy their designs, if a different one makes more sense one way or another.

There is only one set of functional open-source 3D-acceleration graphics drivers: the Linux KMS-DRM ones. Every other open-source operating system, including all the BSDs (Free/Net/Open/DragonFly), as well as other niche OSes such as AROS and the like, port Linux’s drivers instead of writing their own.

FreeBSD, for a while, had a partial rewrite of these drivers to be more FreeBSD native (even including code formatting and API usage), but eventually they abandoned that and wrote a compatibility layer, too. Google’s Fuchsia/Zircon had their own, but I think these were only for select chips of select vendors, and, well, we are not Google anyway. MorphOS supposedly has one for ancient Radeon cards, but it is closed source.

The fact of the matter is that each of these drivers is massive. The AMD KMS-DRM driver alone, for instance, is over a million lines by itself; and the same is true of Intel, Nouveau, etc. The entirety of Haiku is “only” ~3.5 million lines. We are talking about individual drivers that are nearly as large as the OS itself.

There is no possible way we can replicate these from scratch, on our own, in our spare time, to the level of quality and performance achieved by the paid developers of the Linux kernel (most of whom work for the very companies whose hardware they are writing drivers for!).


Haiku does not have enough manpower to do massive changes. To reach the actual goal, small iterative steps with tangible results should be taken.

I am not saying we should implement a completely different graphics system; I am saying that graphics components from Linux can be used in a different configuration than on Linux. For example, modules can be loaded into different processes than on Linux (the port-based idea, or the drm_server suggested by pulkomandy), or wrapper layers can be used.

app_server already supports multiple desktops via remote desktop; you can try the HTML5 client: html5_remote_desktop « tools « src - haiku - Haiku's main repository. It is not complete, as it does not start Tracker/Deskbar etc., but it could be improved. I see no need for multiple app_servers.

The Haiku project has always been uncompromising with our principles and philosophies. This is why it has taken Haiku so long to get to where we are now; but it may also be why the project is still alive and well. Plenty of other niche OSes have fallen by the wayside and we have not, after all.

Doing this specific thing in a “hacky” way, as you describe here, would take maybe 30% of the time of doing the proper refactor. But you will have to do the refactor anyway, so why not just spend the extra time and do it now? You save time overall by taking the long (but correct) route first.

We would then be in completely uncharted territory. Having read some of the driver code, I am not sure this is actually possible at all, as the drivers themselves do a lot of, well, “drivery” things like modifying interrupt paths and the like. It is probably possible in theory to have a kernel/userland split, but Haiku was not designed as a microkernel and does not have great APIs for working with hardware outside the kernel.

That is still for only one user. The applications running under that user still can talk to the applications on the “main” desktop, and share certain kinds of buffers. Adding the right privilege checks to app_server will be very difficult if not actually impossible here.

It’s also worth noting that app_server has all bitmaps and other resources from all running apps mapped into it. So on 32-bit systems, having lots of users would mean occasionally having errors or apps fail to start because app_server is out of address space due to so much usage. Wouldn’t it just be simpler to run multiple app_servers?

That’s simple… it would probably exist by now.

After all, nothing would prevent you from doing it properly afterwards. In fact, I would imagine even a hacky solution would be a major confidence boost for people watching the project… probably enough to get someone working full-time to do it right.

Linux didn’t grow a 3D driver stack overnight… it’s been through at least three major redesigns and several minor ones.


First, I don’t think that using ports or a drm_server for the userland DRM Master code is hacky. Second, it is wrong to think in terms of total work size. If you do not do anything, there will be no progress at all, compared to making something work and refactoring later. It may require more work overall, but it will take less time, because less time will be wasted waiting for “someone else with the requisite knowledge to tackle #2 and #3” or something similar.

I was talking about the userland part, not the kernel-land part. It is fine to run DRM as a kernel module. But the userland part can be organized in a different way than on Linux.

That is fine. There is no need to support multiple GUI user sessions (please name a real-world use case for the year 2020 if you think otherwise). Even with remote desktop, a user just accesses his own PC remotely, so it is still the single-user case. Privilege checking can be implemented in app_server if needed; Windows’ win32k.sys managed to do it. If you really want multi-user GUI sessions, it’s simpler and safer to run each session in a virtual machine.

This is not a problem on 64-bit systems; all modern PCs have 64-bit support. Once the work on 32-bit application support is completed and merged, 32-bit applications, including BeOS applications, will run on 64-bit Haiku. Running multiple app_servers would require a significant architecture change: some master process to manage the app_servers would be needed. Also note that app_server is not the only server in a GUI session; there are also input_server, media_server, print_server, notification_server and maybe others.

I was referring specifically to working around the usage of accelerants outside app_server with such a scheme, rather than reworking app_server and the Kits.

No time is being “wasted”, it’s simply not being spent; I’m working on other aspects of Haiku for the moment until either I have the bandwidth to work on all 3 things, or someone can collaborate with me on them.

@PulkoMandy was talking about having the actual DRM drivers potentially run in userland, not just organizing our userland side of things differently than Linux.

Again, I know much less about how 3D all fits together in practice in userland, which is why I want someone else to help with that. I understand the general concepts and theory of how it does, but I’d like to avoid starting a project in which I know very little about all of it.

No, reworking app_server to have privilege checks is actually what would be the larger and more significant architecture change. app_server simply was designed around one desktop session per run, and that is actually a very good design.

My whole point in bringing this up now was that if we are designing a “drm_server”, it may as well be a “display_server” that manages what app_server instance is actually using the display, as well as being the “DRM Master”. The rest of the architectural changes inside app_server, after accelerants are moved elsewhere, are actually pretty easy as far as I understand it.

launch_daemon already spawns both a root process and then a process for each user. notification_server should definitely do the same. input_server should (probably) stay as one for the whole OS due to how it interacts with drivers, and switch what app_server it sends input to based on what user is signed in (this may actually be the trickiest part of the whole transition.) The rest could go either way, but this is not some insurmountable technical problem.