Got a question for you

Really no problem, you’re free to have your say, no insult or rub taken.

Besides, I still think that porting DRM to Haiku is a great idea. I would also like to see a fully hardware-accelerated Haiku, because I think that would be a great thing all round. If I had my way I’d be porting Android to Haiku, as well as making the kernel more BSD-ish, like MiNT with its SysV IPC. I’d also like to port the BSD networking stack; not saying the Haiku one doesn’t work, it’s just a preference.

I’d also like to see the kits take on a whole independent character, be more like IMKit, FeedKit, etc. I’d like to see more kits, not just the ones that BeOS had. I would like a HomeKit, a MobileKit - think KDEConnect - an AudioServer API, VR/AR, a new DriverKit, that would be Linux like - think udisks, upower, and udev - get the drivers out of the kernel and into userspace, and a whole lot more, but those are just my opinions which I probably shouldn’t have shared.

So as I said, no problem, we’ve all got horses in this race, so let’s just see where it goes, OK?

Actually, this is a usable solution. It allows running DRM and Mesa EGL with minimal changes:

  • Run a Wayland server in a separate process that acts as DRM master. The Wayland server implements only compositing: no input handling, window behavior, etc.
  • Run app_server as a Wayland client and use the whole screen as a single Wayland surface.
  • Hardware-accelerated applications are also Wayland clients that create a Wayland surface for the accelerated part and get clipping information from app_server, to avoid the accelerated surface ending up in front of everything. The BDirectWindow mechanism can be reused to manage clipping.
  • The Wayland protocol is only used internally and is not supposed to be used directly by programs.

No, BDirectWindow provides access to the front buffer. Copying the back buffer to the front buffer requires an explicit command that is not supported in BDirectWindow. BeOS has no screen back buffer and no API to control one.

Access to the front buffer has some disadvantages; for example, reading from it is terribly slow on most hardware.

So we get the Wayland server running on the framebuffer from DRM? Would we want a kernel driver for Wayland that could run alongside DRM?

The Wayland protocol doesn’t need kernel support. The structure is something like this:

Kernel DRM drivers → Mesa userland drivers → Mesa common code → Mesa Wayland EGL → Wayland server/client.

Isn’t there a Haiku Gallium port? I think it only supports the software pipeline, but could it be extended to support DRM and the Gallium hardware drivers?

Also, there’s some DRM code in X11; could that be useful, since it’s supposed to be X’s interface to DRM?

Yes, but porting Linux DRM kernel drivers is needed. Gallium drivers use kernel DRM drivers.

I think it’s either PulkoMandy or Barrett17 that would know more about the state of the DRM port.

Actually, I just looked it up here: https://github.com/hamishm/haiku-drm. It looks like nothing’s been happening since 2015.

Some recent and probably most progressed work on DRM port: https://github.com/andreasdr/haiku.

What do you think about the port?

FWIW, Haiku does already use networking drivers from FreeBSD using a translation layer.

IIRC this is planned for post-R1 as part of Project Looking Glass. Tbf, parts of that have already been implemented in Haiku. Not sure if this will be reconsidered for R1, though.

@win8linux
Sorry to hijack the thread, but could you elaborate a bit regarding FreeBSD network stack? Has Netgraph been ported as a part of it?

Are you thinking of the Dalvik VM or ART? I think both of these could probably be ported, and in a more efficient way than Linux desktop Android support currently is. (Normally Android initializes all Java framework classes on boot so that application launching is fast via fork(), but the Android desktop support I’ve seen launches apps without this preloading, leading to some pretty bad load times. I haven’t tested this in the last two years though, so it could be better.)

Audio server kit: well, we already have the media_server and the Media Kit; I think that might be what you’re thinking of?
There is nothing stopping anyone from adding new kits, by the way. We already have two “new” kits, namely the Layout Kit and WebKit (although the native API for the second one is still missing, but hopefully with WK2 working I or someone else can work on that; WebKit could use some help in any case! It has mostly been just PulkoMandy working on it for the past years, and me recently as well).
I generally agree that some drivers should be more in userspace anyhow, but I also think DRM should be. I am still hoping to be able to write native accelerated drivers, even if only for OpenGL (omitting machine learning, for instance).

As a somewhat related idea for third-party apps, as a potential alternative to a native Wayland API for them, I am kind of interested to see if we could get NVMM ported and be able to run single-application instances of third-party app VMs via SPICE as a native app_server client. That might make it possible to offload some complexity from the package manager, or at least abstract it away.
Note that I was disagreeing with making app_server switch to Wayland as its native protocol, since in my opinion that would be a compatibility disaster; I don’t see much harm in making a Wayland server a client of app_server, or a proxy of sorts.

OpenBeOS is the old name of Haiku. So the code there is just a 20 year old older version of what we have now, with less features and more bugs.

That does not change a lot. app_server uses various accelerants to communicate with the video hardware, including a “framebuffer” one where it does not in fact interact with the hardware at all, but just gets passed the address of an already existing framebuffer. In this situation, no videomode changes are possible. This driver is used in EFI machines when no other driver is available.

At some point you will be removing everything that makes Haiku, well, Haiku. I’m not sure what you mean by “more BSD-ish”. Our kernel is nothing too unusual and already allows implementing most of the POSIX APIs. The big difference is we wrote it in C++ instead of C, but this does not have a huge impact on the design.

The state is nothing usable, or even compiling, as far as I know. I did not get very involved with this; the reason is, I tried to read the code of the Linux drivers and found them very complex and badly organized (files with 5000 lines of code or more get me lost). After all the effort we put into nice native drivers, I am a bit annoyed to give up on this and start using this code from Linux, which only people working at Intel can understand and maintain (in the case of the Intel drivers; the other drivers I don’t know as well).

But that is just personal pride and a bit of “grand scheme ideas”, and the short-term interests of Haiku may very well be elsewhere.

As for new kits, it’s easy to have lots of ideas, but someone has to write the code. Currently I feel it’s more important to keep things working for me: a decent web browser, a good video driver (I wish I could use an external display with my laptop…), and fixing some of the most annoying bugs. But I hope my work in these areas can also free other people to do the cool things.

nice to know what you like to do…

Yeah, I know, but to be quite honest, I was thinking of net80211 and netbt. If the BSD stack could be made more modular, it could be a great addition.

More kits would mean more things Haiku could do. Getting speech natively supported would be the first thing; maybe something along the lines of Jovie?

I don’t think anyone’s working on porting the BSD networking stack, but I see no problem with porting it. It’s just that someone needs to do it.

ART, because it’s the replacement for Dalvik. If done right, given that the Android environment uses Binder for its IPC, not only could the Android stack be ported over, but we could probably make its components native.

An AudioServer would be separate from the media server/kit. I saw that BeOS had a separate AudioServer, and thought it would be good to re-implement it in Haiku.

Native accelerated drivers are a great idea; a DRM drawing/driver kit in app_server would probably be a useful scenario, as you could draw directly to the hardware and only load the driver for the machine.

Has anyone ever looked at MGL from SciTech? It’s supposedly a user-space DRM that supports hardware-accelerated OpenGL. It would require a bit of work, though, as it’s kind of outdated.

I don’t know much about the Spice protocol, so I don’t know what I should say here.

By making the kernel more BSD-ish, I meant in a similar way to MiNT. It hasn’t lost what makes it MiNT, but it has the kernel APIs to make it more like BSD. Nobody wants to lose anything that makes Haiku, Haiku.

But we actually want app_server talking to the hardware. The drivers would be add-ons of app_server. app_server would have a little hardware abstraction layer that would talk to the hardware and load the driver it needed. We could have a DRM kernel layer, the drivers would be in user space (the accelerants, like X11 drivers), and the main DRM code would be part of app_server.

I think the BSD DRM port would be the place to start. The one from DragonFlyBSD is actually really well written, and the compatibility layer could very well be useful in the long run; having a compatibility layer for both BSD and Linux couldn’t hurt. Also, there’s nothing stopping anyone from taking it their own way, like moving a lot of the DRM code base into app_server, making the drivers more modular, and loading them in user space.

A VirtualDisplayDispatch driver in app_server could probably do that.


The Radeon driver already has a multi-monitor support class; wouldn’t it be possible to write a class for app_server that supports multiple monitors using the remote and workspace APIs?

Well, MiNT started as a completely different OS, with no POSIX support. On the other hand, Haiku already has fairly good POSIX support, and porting existing apps is easy because of this. So I think there is not a lot left to do there.

That is already how app_server works. Our graphic drivers are a small kernel-side part that will map the hardware in memory and handle interrupts, and all the code for modesetting, etc is running in the accelerant in app_server. The accelerants expose a common API that the app_server can use.

app_server can already render things to offscreen bitmaps. It can also send the drawing commands to another computer to draw there (remote_app_server), and we have a client for that which renders into an HTML5 canvas. So you can use any web browser as a remote desktop client. The app_server side seems pretty ready to do anything you would want with it, at this point.

The main question is how well it will follow advances in Linux drivers. We know that Linux provides no stable API for drivers and they keep changing things. So for every new Linux version, a large part of this work needs to be redone. It is sad that this is done, independently, by FreeBSD, Dragonfly, Genode, and maybe someday Haiku as well. It is also sad that Linux becomes the de facto standard driver API when there could be something nicer and cleaner, or at least, a bit more stable with less API changes.

My understanding of DRM is that a lot of things are already moved to userspace (in Mesa/Gallium3D). I don’t know if it’s possible to move more of it; maybe the modesetting part. But it would probably mean changing a lot of the driver code to rearchitect it. At which point it is questionable why we would bother starting with the Linux drivers, if we are going to rewrite it all anyway.

But then again, if you think that’s an idea worth exploring, please do, and even if in the end it doesn’t work, we will all learn something about it :slight_smile:

The Radeon driver was written to add some limited multi monitor support even in BeOS (where changing app_server was not possible). What it does is allow you to configure video modes for two monitors, but then expose a single big “virtual” monitor to app_server.

This has some limitations, for example, CenterOnScreen will center windows midway between the two displays.

Once you have a driver working that way, it is not so hard to then change app_server to handle multiple monitors in a more advanced way. It will require a change of the accelerant API, basically adding a “screen identifier” parameter to all functions called from app_server to the accelerant. There is already a patch starting to do that in Gerrit.

These efforts have been on pause on my side mainly because I use Intel video devices and I did not manage to get any kind of multiple display support working there at all. But it looks like rudolfc is now looking into that, so maybe the situation will unblock soon :smiley:

The main reason to start from the Linux drivers is that they’re more modern. The AMD driver supports the latest Vega cards, Intel’s i915 driver supports the latest Intel cards, and the Nouveau driver supports most Nvidia cards.

It’s the same with wireless drivers, ALSA audio drivers, and others. This is because in most cases the drivers are supplied by the manufacturer, so they are the newest.

The Mesa/Gallium3D drivers would be the accelerants in Haiku, whilst the Linux drivers would provide the communication with the hardware. But you’re right that the idea would probably need work before being finalized into an implementation.
