Why BSD in Haiku?

I’ve seen a couple of cases where Haiku has taken inspiration from BSD systems, such as the FreeBSD WiFi compat layer, the current NVMM project, etc. I’d like to know the reasoning behind that decision: why BSD and not another system (like Linux), why not write it from scratch (I think the answer to that is kinda obvious but there’s always a long explanation :wink: ), and whatever else is considered.

On a different note, I’m not sure how to phrase this, but I’m curious to what extent the Haiku API being object-oriented should involve the kernel being object-oriented as well. I know the kernel is more structured programming, and I think I can tell why, but I’d like to get the facts and opinions from the experts we have.

I ask this stuff because I only know the basics of OS theory and I’d like to dig deeper and get to know Haiku better, so any knowledge is very appreciated.

Haiku has a limited pool of developers, who in turn have limited access to hardware. Reusing BSD code where possible makes it possible to extend hardware support while keeping a compatible license.


I think this really depends on what is needed, and what exists.

The BSDs tend to produce great, self-contained, somewhat portable code. In Linux, kernel subsystems often depend on a lot of other Linux kernel subsystems, and so are not as easily portable.

Another reason is the permissive licensing of BSD code: no GPL.

That being said, we do have components from Linux, or which were intended somewhat for Linux too: the bash shell, GNU coreutils, musl libc.
(not necessarily only for Linux, but I think you get the point)

But this really is a case by case thing : )


I don’t know why, but I’m glad it is as it is, because BSD > Linux. Haiku is its own thing (and that’s great) but I had a “BSD feeling” right from when I started using it seriously.

Again, I don’t know because I never looked at the kernel, but the API is object-oriented - and that’s a good thing. However, it is worth mentioning that object-oriented programming is not the solution to everything. Sure, there are things (like the API) that are much better implemented that way. But there are other things where functional / modular programming is a far better approach, and an object-oriented implementation does nothing but make the code more complex without any real benefit. I’m not sure the kernel itself falls into the object-oriented-preferred category. My guess is it doesn’t.

There are several aspects:

  • The license (Linux uses GPL, Haiku and BSD both use BSD style licenses which are less restrictive)
  • Stability: Linux tends to change a lot of things inside the kernel all the time. When you port one version of their driver, and all the compatibility layers it needs, it will often be very difficult to import code from newer versions. In the BSDs, things tend to be a bit more stable (maybe only because they have fewer developers, but still)
  • The BSD code, from my experience, tends to be better commented and documented, and easier to understand

If you word it this way, the answer is: not at all. The interface between userspace and the kernel is a very standard and boring syscall system, very similar to what you find in other UNIX-inspired systems. The object-oriented API is really just a wrapper above a traditional UNIX system, for the most part.

However, the kernel, like most other parts of the system, is written in C++. It just gives us a few more features than C (templates, classes, private fields, …) that allow the code to be a bit simpler and more readable than plain C. There are some places where a C++ interface is used, but it is not that common. Often the interface (for example between the drivers and the kernel) will be written in C with structures and function pointers. There is ongoing discussion to change this.


Just in case anyone’s interested, Haiku also has another OS feature that’s kind of like OOP, but - just for OSes - is perhaps much more powerful - and that BeOS didn’t quite get: the file descriptor.

Compare with BDataIO, the BeOS base class for things that Write and Read like BFile etc. You’ll use them here and there for things like the Flatten() methods. On UNIX, and on Haiku, that class wasn’t ever really necessary, because anything that supports I/O will have a file descriptor and the support for reading and writing etc. on it that goes all the way to the system call level.

BeOS had file descriptors, of course, by virtue of its UNIX like command line environment, but like others of that era, they sadly missed the significance of what they had and left their network sockets out of it. That made network applications harder to write and especially to port. I remember sweating over ssh … bleah. It wasn’t only BeOS, there were others in that era that did the same thing thinking that they could buy a network package to put in with their (typically) SystemV OS. BSD of course did it the right way, and their socket system is what we use today. Be eventually saw what they needed to do, but the lights went off on BeOS before that integrated sockets made it into a release.

The file descriptor of course works in any programming language that bothers to provide some access to the read and write system calls. In the shell, for example,
echo error! >&2
writes to unit (i.e., file descriptor) 2, which is used for error output. Shell I/O can open and close file descriptors, dup() them to others, etc. Units are typically open on disk files or pipes, but of course it doesn’t make any difference. The stdin/stdout/stderr system more typical in C programs is buffered I/O built on top of file descriptors, for the efficiency of batching up I/O and economizing on system calls.


On a somewhat related note, has switching to using Zsh been considered? I know that historically Bash was picked since it’s what BeOS shipped, but Zsh is mostly compatible with it and has a more permissive license; it has the same license that Haiku has (MIT).


Well, we don’t really push that as far as Linux does with its signalfd, timerfd, epollfd, and eventfd.

And the fact that a lot of the network stack has to run inside the kernel just to have sockets as file descriptors remains debatable.

Surely a system where both file descriptors and sockets are user-space concepts would be interesting. It’s not uncommon to want to write something to a memory buffer, only to find that the API expects a file descriptor, so you have to go through some object in the kernel to get the data out (or in).

That’s where BDataIO comes in useful: it extends the concept of “an object you can read, write and seek” further, allowing you to use it with custom objects from an application, and removing the need for the kernel backing that file descriptors require. So, I don’t agree that it is “not really necessary”. It is also used for buffering (which is essential for good performance in many cases, but also not necessarily easy to implement in a one-size-fits-all way, needing adjustments for each case).
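The idea can be sketched in a few lines (these are stand-in classes for illustration, not the real BDataIO/BMallocIO headers): an abstract read/write interface that a purely in-memory object can implement, with no file descriptor or kernel object behind it.

```cpp
// Sketch: a BDataIO-like abstract interface with an in-memory backend.
#include <sys/types.h>
#include <cstddef>
#include <cstring>
#include <vector>

class DataIO {  // stand-in for BDataIO
public:
    virtual ~DataIO() {}
    virtual ssize_t Read(void* buffer, size_t size) = 0;
    virtual ssize_t Write(const void* buffer, size_t size) = 0;
};

class MemoryIO : public DataIO {  // stand-in for a malloc-backed BDataIO
public:
    ssize_t Write(const void* buffer, size_t size) override
    {
        const char* p = static_cast<const char*>(buffer);
        fData.insert(fData.end(), p, p + size);
        return size;
    }

    ssize_t Read(void* buffer, size_t size) override
    {
        size_t remaining = fData.size() - fPos;
        size_t n = size < remaining ? size : remaining;
        memcpy(buffer, fData.data() + fPos, n);
        fPos += n;
        return n;
    }

private:
    std::vector<char> fData;  // lives entirely in the application
    size_t fPos = 0;
};
```

Any function written against DataIO (a Flatten()-style serializer, say) then works on a file, a socket wrapper, or this plain memory buffer, and the memory case never touches the kernel.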

It’s in the depot and there is nothing stopping anyone from using it.

We have fish too if that’s how you swing.


I don’t really see file descriptor sockets as a major feature in general. The only issue for me, e.g. with sshd, was select(). If BeOS had provided an alternative that could replace select() with something that could wait for (socket:0 || tty:0) (and maybe there was already that wait_for_objects that I just didn’t know about), I don’t know that any other UNIX fcntl-type ops were that important. I’m sure there are a few applications that use them, but far more typically sockets are a distinct I/O situation, apart from select/poll.