Haiku: Where to innovate?


I thought you knew all too well what happens when you give a single dev a few months of time to redesign a whole system API. This is how the BeOS media kit was done.

Designing core APIs like this needs a lot of time surveying existing applications, their use of the API, the problems they hit, and then thinking about solutions. I think this is much better done in a group discussion than by a single developer working alone. I don’t think anyone can have a good view of everything: the implementation details that make a solution workable or not, the use in native applications (both simple and complex ones), in ported applications, whether using a ported toolkit (Qt) or native options (WebKit / WebPositive), with realtime needs (media stuff), etc. No one can pretend to have experience and vision of all these things at once.

So what we need is to get everyone’s ideas and try to build a common vision of where we should go with things. Real life meetings are an ideal place to do that, but it may not be convenient for our team, so we will have to fall back to some less practical way of doing this.

On to OpenBinder, and before even discussing the details of how to implement it…

There are two major annoyances I identified in the BeAPI while working on applications.

One is the lack of a signal-slot system like the one in Qt. In Haiku, you have to do a lot of subclassing to handle incoming messages, because there is no way to plug things together “from the outside” as you would do in Qt by connecting signals and slots. A little-used API actually allows this: StartWatching() / SendNotices(). I think we should rework most of our objects (BControl, possibly network stuff, etc.) to use that instead of (or in addition to) sending “well known messages” to a single target. Note that I don’t get into implementation details here; maybe this is currently orders of magnitude slower than plain BMessages.
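To illustrate the idea, here is a rough, portable sketch of the StartWatching() / SendNotices() pattern in plain C++ (this is not the actual Haiku API; the class and method names merely mirror it): observers register for a notification code from the outside, with no subclassing of the observed object.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Portable sketch (not the real BHandler) of external observer wiring.
class Notifier {
public:
    using Observer = std::function<void(const std::string& notice)>;

    // Comparable in spirit to BHandler::StartWatching(target, what):
    // register an observer for a given notification code.
    void StartWatching(uint32_t what, Observer observer)
    {
        fObservers[what].push_back(std::move(observer));
    }

    // Comparable in spirit to BHandler::SendNotices(what, message):
    // fan the notice out to everyone watching this code.
    void SendNotices(uint32_t what, const std::string& notice)
    {
        for (auto& observer : fObservers[what])
            observer(notice);
    }

private:
    std::map<uint32_t, std::vector<Observer>> fObservers;
};

// Demo: a hypothetical control notifies two observers of a value change,
// neither of which had to subclass the control.
int CountNotices()
{
    Notifier control;
    const uint32_t kValueChanged = 1;  // made-up notification code
    int received = 0;
    control.StartWatching(kValueChanged, [&](const std::string&) { received++; });
    control.StartWatching(kValueChanged, [&](const std::string&) { received++; });
    control.SendNotices(kValueChanged, "value changed");
    return received;
}
```

The point is that the connections are made entirely from the outside, which is exactly what the current “well known message to a single target” approach makes awkward.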

The second problem is the pain of writing the MessageReceived() function everywhere and handling messages by hand. This is where the Binder comes in. The idea is to implement the usual RPC (CORBA and similar things were hyped back then) and allow direct procedure calls, even between processes, hiding the low-level machinery of message passing. This fixes the second part of what signals/slots achieve in Qt: no need to think in terms of message passing anymore, we really get to think in terms of events and the callbacks they trigger.
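The proxy/stub shape behind this kind of RPC can be sketched in a few lines of plain C++ (names here are illustrative, not OpenBinder’s; a real Binder would move the message across a process boundary instead of a direct call):

```cpp
#include <map>
#include <string>

// A toy wire format standing in for a BMessage.
struct Message {
    std::string what;                 // method selector
    std::map<std::string, int> args;  // named arguments
    int reply = 0;
};

// The interface both sides agree on (what an IDL compiler would generate).
struct ICalculator {
    virtual ~ICalculator() {}
    virtual int Add(int a, int b) = 0;
};

// Server side: the real implementation, plus a stub that turns incoming
// messages back into ordinary method calls.
struct Calculator : ICalculator {
    int Add(int a, int b) override { return a + b; }
};

struct CalculatorStub {
    ICalculator& impl;
    void Dispatch(Message& msg)
    {
        if (msg.what == "Add")
            msg.reply = impl.Add(msg.args["a"], msg.args["b"]);
    }
};

// Client side: a proxy with the same interface. The message passing is
// hidden behind what looks like a direct procedure call.
struct CalculatorProxy : ICalculator {
    CalculatorStub& remote;  // in-process stand-in for the IPC transport
    CalculatorProxy(CalculatorStub& stub) : remote(stub) {}
    int Add(int a, int b) override
    {
        Message msg{"Add", {{"a", a}, {"b", b}}};
        remote.Dispatch(msg);  // a real Binder would cross processes here
        return msg.reply;
    }
};

int DemoCall()
{
    Calculator impl;
    CalculatorStub stub{impl};
    CalculatorProxy proxy(stub);
    return proxy.Add(2, 3);  // reads like a direct call, no MessageReceived()
}
```

The client never writes a MessageReceived() switch; the marshalling lives once, in generated proxy/stub code.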

The main advantage over Qt is that Binder also bridges between applications, something that is key to the BeAPI. Ideally we would go further and bridge across machines, allowing this to work transparently over the network as well. And we can already bridge time, by serializing a message to disk and reopening it in the future; it seems useful to keep that possible as well.
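The “bridging time” part is just flatten/unflatten, as BMessage::Flatten() and Unflatten() already do to a file. A minimal portable sketch of the same round trip (the format here is a toy, purely for illustration):

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Toy message that can be written to a stream and rebuilt later,
// mimicking what BMessage::Flatten()/Unflatten() do with a real file.
struct SimpleMessage {
    uint32_t what = 0;
    std::string payload;

    void Flatten(std::ostream& out) const
    {
        // Write the code, the payload length, then the raw payload bytes.
        out << what << '\n' << payload.size() << '\n' << payload;
    }

    void Unflatten(std::istream& in)
    {
        size_t size = 0;
        in >> what >> size;
        in.get();  // skip the '\n' separating the header from the payload
        payload.resize(size);
        in.read(&payload[0], size);
    }
};

bool RoundTrips()
{
    SimpleMessage original;
    original.what = 42;
    original.payload = "saved for later";

    std::stringstream storage;  // stands in for a file on disk
    original.Flatten(storage);

    SimpleMessage restored;
    restored.Unflatten(storage);
    return restored.what == original.what
        && restored.payload == original.payload;
}
```

Whatever replaces or extends BMessage should keep this property: a message that can cross a process boundary can just as well cross a reboot.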

Now, we can evaluate our options. Is Binder still the best answer for this, or are we still blinded by Be marketing and the hype around it? What did other people achieve in the last 20 years? Were there lessons learned about Binder after its use in Android? How does Qt do these days? What about Plan 9 (for the network transparency bit, at least)?


Not really. Haiku has managed to get where it is because we worked as a team and agreed on where we want to go. What if each developer did his own thing? At this point, we would have Axel doing all he can to keep gcc2 working, Ingo migrating apps to use Qt, Barrett introducing the Binder, while someone from Auckland converts everything to use their own layout manager.

Each of these individually is a “can’t hurt” thing, but there would be no direction. This is the difficult thing about R2 in Haiku. There are many directions we can take, and we never dared to really discuss this because we know it’s going to be a difficult thing to do. Some people will not be happy with the decisions taken and will probably leave the project for lack of motivation, working on something they don’t believe in. Personally I’m not interested in an OS based on a Linux kernel and a Qt desktop environment, for example. But this is what some Haiku devs would like to push (no single one of them wants both the kernel and Qt, but this is what would happen if we listened to everyone’s ideas).


Well, it is a very optimistic estimate. But if there’s enough design and agreement behind it, I don’t see why it can’t be done in a limited timeframe. Take the package kit as an example: the contracts themselves weren’t all that long. It took a while to complete the infrastructure, but the design itself had been discussed at length beforehand, and with Ingo’s mastery it is overall a golden nugget among our old APIs.

Regarding the media_kit this is partially true, because it was introduced in R4 and they made the mistake of continuing to develop it for R5. Going by the Be Newsletters, it took a bit more than a few months to implement it from the ground up. But it wasn’t terrible at the time; it was good enough. It is just that after 20 years it looks like a ’60s Alfa Romeo.


I agree completely, but I don’t pretend to be the oracle. Still, when people tell me our API design has no problems, I can only think they are blind. As far as I am concerned, my focus is the media framework. We can each contribute ideas from our area of expertise.

True, but only relatively so. Lots of stuff around, like web standards, is designed mostly in mailing list or IRC discussions. I think it is overall feasible, and anyway, there’s little participation in meetings, so it would in any case be a small portion of people gathering ideas. We already had a discussion about why I think people at BeGeistert should not presume to decide for everyone.

I understand perfectly what you mean. Back in the days before the layout API was here, you had to override a lot of methods and the resulting code was painful. I think, as I already proposed in the past on the mailing list, the “OnSomething” pattern which Android inherited from the OpenBinder can be something to look at. It really works in a similar way to signals, and the Binder provides support for something like that.

Incidentally, one of the major problems of media nodes is the HandleEvent thing.

Yeah, I agree completely. As you note, the Binder provides us that pattern, and it is really an evolution of our current one. It supports messages; they just become less important for the client of the API.

I am not a big fan of signals, although in the end they are effective. Bridging applications is a key point of the Android API too. Contexts allow all those things in applications, like opening the calendar, adding a date, and returning to the controlling app. Believe me, this is not trivial. Google devs didn’t do it by chance. It is a well thought out security feature that allows apps like WhatsApp to be protected from possible malware. In the end, this really works on top of the B_DO_NOT_RESCHEDULE thing.

The fact that Android is built around the same concepts (Contexts, Binder, IDL) should be a key indicator. Nonetheless we are not obliged to use the OpenBinder itself; we can also take stuff from Android as well. Consider that internally the Android API is entirely C++.

There’s a key difference, however. Qt is designed more like a GUI framework; it doesn’t pretend to be the central API, while the Binder is designed to be the core API of the OS. So I’d carefully evaluate Qt stuff in this regard.


I agree the implementation in Qt (with moc and everything) is not so nice, and the resulting syntax and build system are a bit confusing. I would prefer a plain C++ implementation.

However, I’m first looking at the concepts before digging into such details. You may have thought this through already with your own vision, but let’s discuss this in the open to make sure as many people as possible share the idea and see where things are going.

I agree Android has some nice ideas in terms of how things are designed from a user experience point of view. We should go much further in terms of thinking about “activities” rather than “applications” (with applications only providing activities to the user and to other applications).

There are a lot of good ideas to draw from there: activities, the way applications are sandboxed from each other yet still manage to exchange data, etc.

I fail to see how B_DO_NOT_RESCHEDULE fits in there, however. It is a very low level thing, and only has an impact on performance as far as I can see (it saves on useless context switches). I would concentrate first on getting things running, and only a little later on such low level aspects (keeping them in mind however, to see if what we’re designing can work with acceptable speed/reactivity).

This is maybe why I’m not so bothered about using the Haiku kernel: it may be slow, but for me it gets the job done. At this point in the development of Haiku, that’s all I need. And the idea of an OS kernel written in C++ appeals to me. My short experiences digging into the Linux source code comfort me in this. I prefer to build upon this clean codebase that lacks many features rather than trying to make sense of Linux internals. This is of course irrelevant if you consider the kernel as a black box and don’t plan to ever touch anything in it, and I understand it is a lot of work, but I’m not in a hurry. I work only for myself here; I don’t have a customer waiting for me to ship something next month.


Oh I agree, however if someone really wants something a certain way, and they are unflappable in their resolve that it is best, then let them prove it rather than shooting them down. This conversation wouldn’t happen if that system worked 100%… but then no system ever does work 100%, so I’m just pointing out an obvious resolution to the issue. Also we should all be aware that none of us are all-knowing, and other people have different ideas about how to do things that are perfectly valid but may not make sense to us at first.

As far as reviewing the code for usage while planning development, that is exactly what I had in mind, but with the addition that you also take that code review and generate a synthetic test case out of it… to be used to validate that it is good enough. That would be valuable as a regression test also.

Sort of like the arewefastyet.com site for Firefox… except as an application that tests the support kit. Probably with a benchmark mode that everyone agrees is a valid synthetic representation of potential Haiku usage, probably another with pathological cases, and a live tunable one.

I mean if you actually want things to be better, you have to do that sort of work to prove it to yourself as you go along, otherwise you could make changes that cause serious performance regressions and not know it until you’ve wasted far more time than you would have building the test app to begin with.


I think they do that mostly because moc allows for better portability. The Binder addresses this using various methods, like IDL and relegating non-portable parts to well-defined areas.

I am available to discuss as always, but you may agree there’s little we can invent at this time. All the major OSes provide equivalent functionality: .NET for Windows (which is a spiritual evolution of CORBA), the Binder for Android, Qt on Linux, and so on.

That’s why I am all for beginning to introduce BThread and BProcess. Those are needed to fit the BActivity pattern.

Indeed. That’s a primary security feature of the OpenBinder.

I spent half an hour trying to recover the article where one of the BeOS/Android devs explained it in great detail. While in BeOS this is mostly a performance feature, in the Binder it also becomes a security measure: besides avoiding threads ping-ponging while waiting for a response, in the Android Binder it prevents anything from taking control and interfering with the client-server relationship. Once I recover that article I will link it.

My problem isn’t really about being slow or not. The problem is that I can’t do any of what I’d like to do, due to drivers and supported platforms.

I am fascinated by this idea as well. But at some point you have to realize that you need to use what fits your needs. Over time I became much less of a BeOS enthusiast and more of a Linux Fremen.

I personally had quite a good experience with the kernel code. Yeah, the whole GNU/Linux ecosystem is going to be painful, but it is also true that you are not obliged to take everything.

It is not a question of being in a hurry. The main difference is that for you the Haiku desktop is enough. I can’t live with it anymore without a lot of the things I use in my job; there’s really a lot of stuff missing. If I had the Haiku userland on top of Linux, I could probably have a good compromise.


I have an idea that might be of interest. Not sure how useful it will be though…

A lot of OS messaging frameworks are ‘fire and forget’ or ‘yes, the message was received, things are now happening’, whereas CORBA/function-call style approaches block and wait for a response.

I was wondering if there’s any need for a Unix pipe use case, but perhaps a more advanced version: e.g. one that can apply ‘back pressure’ if it cannot process data as fast as it is being generated, or can ‘spin up’ multiple instances of stream processors to scale out and handle more throughput. Kind of like a mini inter-process Kafka setup.
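The back-pressure part of this idea can be sketched as a bounded queue where the producer blocks whenever the consumer falls behind, so data never piles up unbounded (plain C++ threads here; a real OS-level pipe would live in the kernel, and all names are illustrative):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// A pipe with a fixed capacity: writers block when it is full (back
// pressure), readers block when it is empty.
class BoundedPipe {
public:
    explicit BoundedPipe(size_t capacity) : fCapacity(capacity) {}

    void Write(int value)
    {
        std::unique_lock<std::mutex> lock(fLock);
        // Back pressure: wait until the consumer has drained some data.
        fNotFull.wait(lock, [&] { return fQueue.size() < fCapacity; });
        fQueue.push_back(value);
        fNotEmpty.notify_one();
    }

    int Read()
    {
        std::unique_lock<std::mutex> lock(fLock);
        fNotEmpty.wait(lock, [&] { return !fQueue.empty(); });
        int value = fQueue.front();
        fQueue.pop_front();
        fNotFull.notify_one();
        return value;
    }

private:
    size_t fCapacity;
    std::deque<int> fQueue;
    std::mutex fLock;
    std::condition_variable fNotFull;
    std::condition_variable fNotEmpty;
};

// Demo: a fast producer pushes 100 values through a 4-slot pipe. It is
// throttled to the consumer's pace and nothing is dropped.
int DemoSum()
{
    BoundedPipe pipe(4);
    std::thread producer([&] {
        for (int i = 1; i <= 100; i++)
            pipe.Write(i);
    });
    int sum = 0;
    for (int i = 0; i < 100; i++)
        sum += pipe.Read();
    producer.join();
    return sum;
}
```

Scaling out would then mean several consumers calling Read() on the same pipe, which this sketch already permits.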

I was wondering if it might be useful for complex multi-task workflows, such as joining video data streams to produce one output video and applying effects to them, to really maximise throughput in such a complex processing flow.

Just an idea. May not have any practical application at the OS level…


We need a formal QA team. However, in terms of performance regressions, we have some users very well trained in spotting them, so at least we can become aware of them.

The moc is also a remnant from Qt’s early C++98 days. With modern C++ there are most likely ways to do this without a separate preprocessor.
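For instance, with variadic templates and std::function, a typed signal needs no code generator at all. A minimal sketch (purely illustrative, not proposing this exact design):

```cpp
#include <functional>
#include <vector>

// A moc-free signal: connections are plain std::function objects
// registered at runtime; the compiler checks argument types itself.
template<typename... Args>
class Signal {
public:
    void Connect(std::function<void(Args...)> slot)
    {
        fSlots.push_back(std::move(slot));
    }

    void Emit(Args... args)
    {
        for (auto& slot : fSlots)
            slot(args...);
    }

private:
    std::vector<std::function<void(Args...)>> fSlots;
};

// Demo: two slots connected to one signal, roughly what Qt does with
// moc-generated code, but in plain standard C++.
int DemoTotal()
{
    Signal<int> valueChanged;
    int total = 0;
    valueChanged.Connect([&](int v) { total += v; });
    valueChanged.Connect([&](int v) { total += v * 10; });
    valueChanged.Emit(3);
    return total;
}
```

What this sketch deliberately omits is everything moc also provides beyond signals (introspection, property metadata), which is where the real design work would be.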

I disagree here, but it is a matter of how each of us prefers to work. I think you like diving into the code and experimenting, while I prefer spending my time thinking about designing the right solution, so that later on I spend less time writing code. This is fine; experiments provide some data about what works and what doesn’t, and if you don’t mind changing the code later to make it better, that works just as well.

Once again you are jumping way down into implementation details here. For me the activity concept is first a matter of how the user interacts with the computer. If we push this idea as far as it can go, from the user’s point of view there would be no applications anymore. It would just be working with documents, sharing data from one to another, etc. Android didn’t go this far; I think they were headed in that direction, but then iOS introduced the “app store” and they had to bring the notion of apps back to the front. I may be wrong, I did not check the details on this.

Anyway, I could imagine implementing something like this even with the current API and its quirks. For me it revolves more around the use of scripting and the MIME database, and possibly a Binder registry, than around threads and processes (which are, again, very low level concepts).

The BeOS kernel was not written in C++. This is something new in Haiku and one of the areas where we are leading. But we’ll see what Fuchsia has to say about this; they will likely be the ones finally putting that into production.

There are many problems with the Haiku desktop (lack of a decent web browser, of a Dropbox client, and these days difficulties connecting to closed source chat services such as Discord, etc.). But I don’t think any of my problems would be solved by swapping or improving the kernel (ok, maybe I would get a working webcam… I rarely need that). Which is not unrelated to the fact that I’m not currently putting much effort into kernel development, and instead focusing on WebKit.


XHCI freezing the whole universe…


Uh, well, sure, but don’t conflate the issues: a QA team deals with issues we currently have and will have, while the other is a development style meant to head off future issues and wasted effort.


No, I design as well and think a lot before touching any code. But I also like to construct my ideas bottom-up, in a layered way. Too much planning is counterproductive as well; there’s an “in medio stat virtus” way to do things. Assuming that your design is going to be well thought out is how things like the media_kit happen. It is very good on paper, but…

Let’s talk about concepts, but it is somewhat counterproductive to consider them in isolation, because you can’t put down concepts without thinking of their practical realization at least a bit.

So for example, at this point I’d propose to go the OS/360 way and remove completely the concept of file hierarchy: just use attributes, and all files live in the root without directories. Attributes define the organization of those files, and you don’t call them directories anymore but something like “file groups”. Now, if we pursue this without thinking of the practical realization, we will end up with something impractical.

But what would be the advantages of something like that? Do you really believe apps can be modules that can be plugged together to form the user experience? It is somewhat similar to what the media_kit attempted, with disastrous results.

That’s more or less what OpenBinder provides (i.e. IDL).

I was not born yesterday.

Zircon is not fully C++, however. It is more like a C/C++ hybrid: it tends to be plain C at the low level, and more C++ at the higher levels.

A working webcam? I think you underestimate that. But well, I am not here to convince anyone.


My estimation is that in practice very few people use BeOS-era software, especially if we exclude nostalgic use.

This is the old chicken-and-egg problem. What to do first? If you want more users, you have to make Haiku very convenient, ideally for a range of different “target” audiences, including casual folks but also “power” users or experienced users.

The initial impression is also very important, e.g. things should “just work”.

When it comes to programming languages, be it Ruby, Python, Perl, Lua… these should also “just work”. At the least, the latest stable versions should work.

On Linux it is trivial to get all of them to compile. Haiku should strive for that too: either provide them as binaries, or make it possible to compile them. Power users can do the latter.

I am aware that this of course depends on upstream developers too, to write portable code and so forth, but from my experience they are very receptive to people who report what ought to be fixed. But if only very few people use Haiku, then this goes back to the chicken-and-egg problem… it’s a vicious cycle.

IF maintaining is the problem, you can try to aim for ONE stable version that, at least at one moment in time (day X), has worked well. Often people can work around some problems and can even use older versions of a programming language, or any other software. So if this is easier to do, then I recommend going that way. It is probably easier for Python, since there will be more Python users, but I think all the lightweight “scripting” languages should be usable on Haiku as-is.

So for example, at this point I’d propose to go the OS/360 way, and remove completely the concept of
file. Just use attributes, and all files live in the root without directories.

I do not think it is good to make any radical changes at this point. Keep things as they are, and focus on stability and getting a great Haiku release out. IF for some reason some change has to be made, I’d rather recommend focusing on the next big major release; the current cycle, IMO, should go for stability and usability rather than major innovation.

You want many people to use Haiku in the first place, so focus on this part first. It’ll be the better option in the long run.

cb88 wrote:

I mean if you actually want things to be better, you have to do that sort of work to prove it to yourself
as you go along, otherwise you could make changes that cause serious performance regressions and
not know it until you’ve wasted far more time than you would have building the test app to begin with.

Agreed. Things shouldn’t get worse from beta to stable release. :)


You know, the uvcwebcam driver should probably just be included in the nightlies as long as it doesn’t outright crash the system… letting it rot in the repo hasn’t helped it over the past years.

It’s harder for people to notice and fix things if they don’t know they are there and just broken.


It won’t work; it needs isochronous support in the USB stack which is also not implemented. Same for the usb_audio driver.


Oh, for some reason I thought that was done.


Trying to fix some. Ported a test version of node, upgraded the python recipe to 2.7, and working on a golang 1.4 bootstrap right now (1.3 works, though) to bootstrap the latest version. Dev software needs some love.

To the general Haiku pro’s here: I was unable to work with the following non-dev user setup:

Haiku RC1B1 64 bits, booted from USB (3.0 stick, USB 2.0 port).
Plugged in a 1 TB external hard drive (3.0, USB 2.0 port); the partition is NTFS, mounted R/W ok.
Downloaded a torrent (4 GB) using qBittorrent to the NTFS drive, also using aria2c to the NTFS drive (no preallocation).
The system became unresponsive (as in not launching new apps) until I killed the apps doing work on the drive.

Solving or improving use cases like that, so users can do daily tasks, looks (to me) more important than checking whether the kernel needs improvement because of XYZ.


Linux has a new syscall that might be of use to implement this.


Also, OpenBSD’s pledge syscall seems like a good addition: https://man.openbsd.org/OpenBSD-current/man2/pledge.2


Of course. That was an example of how you can fool yourself by thinking about concepts without pragmatism.

Obligatory citation: “Consider the practical effects of the objects of your conception. Then, your conception of those effects is the whole of your conception of the object.”


Not true!

People keep forgetting that input isochronous transfers are implemented already. This is all we need for webcams. The driver has other problems, however, and I could only get it to display a black rectangle and crash either the kernel or the media server.


Indeed. I think I just have a top-down rather than bottom-up approach. I start with “wouldn’t it be nice if the user could…” and then go down the OS layers as needed to add everything I need. From our discussion, it looks like you have identified some problems in the media kit and are now working bottom to top: starting with the kernel (replaced with Linux) and then moving up through the API, layer after layer, to build something new. Both ways are legit, just different ways of seeing things. I think it sometimes makes your reasoning difficult for me to follow (I need to “reverse” it to see the top-level implications of what you work on). The opposite is probably also true.