Questions about Haiku OS

The difference in performance between C and C++ is negligible compared to the gap between C and most other languages. System architecture has a lot to do with performance. The pervasively integrated API/user interface of Haiku has not been matched by the big three (Windows, Mac, Linux) after all these years. Even at this pre-beta stage, Haiku's responsiveness compared to today's big three is similar to BeOS compared to the big three in the late 1990s and early 2000s. That's quite respectable if you ask me. Because of Haiku's integrated Kit architecture, I don't see that advantage going away anytime soon.

Argh, so much wrongness around here.

Haiku is slower than Linux, a lot slower. Running any benchmark will show that. Of course, Linux has thousands of developers working on it, while Haiku barely has a dozen, and not even full-time. What did you expect? If it were possible to get something faster than Linux with so little effort, it would have been done long ago, Linux would be dead, and Android would be running the Haiku kernel!

However, Haiku focuses on desktop computers, and we design things so that the user interface feels smoother. Not faster, but smoother. This is a different thing, and Linux has trouble doing it because it also targets other domains where you want things to be actually fast (servers) or realtime (embedded systems). The constraints of these different uses conflict, and Linux cannot be the best everywhere.

As for C++, there is a misconception that "low level is faster". This is not true. We have modern compilers that do a lot of optimization work. They can do a better job if the developer can express what they need in a high-level language and let the compiler make decisions. C++ has added a lot of extensions to C: templates, to generate fast but type-specific code; constexpr, to tell the compiler that something must be a constant and can be optimized away. In general, the language allows both a nice program structure and giving hints to the compiler so it can generate fast code. It is possible to reach the same performance in C, at the cost of writing and maintaining a lot of code manually (I know this all too well: my paid job for the last three years has been writing C, and I constantly wish I had this or that C++ feature instead of spending days rewriting a slower approximation).

However, Haiku does not use all these nice features yet. There are two reasons for this: compatibility with gcc2, which does not support them all, and the "don't optimize yet" rule: get your code working first, by writing it in a simple, readable, easy-to-follow way. Then identify performance problems, go back, and make changes as needed. This avoids premature optimization, which generates buggy and hard-to-analyze code (because it ends up being more complex) and sometimes prevents seeing higher-level, more effective optimizations.

Finally, remember that all the Haiku releases we ship, and the default build settings, have "paranoid checks" enabled in the kernel. This means there are a lot of verifications for unexpected things happening. You could disable that to get better performance, but it would also make bugs harder to analyze, so for now we are going to keep it this way. And the more we progress, the more I think it may be wise to keep it this way for security reasons, since it also somewhat protects our kernel from malicious attempts to corrupt memory.


Whilst it’s not always possible to achieve maximum throughput and maximum responsiveness at the same time, it is possible to choose a trade-off, and indeed even for the system to adapt on the fly.

As an example, BeOS used small scheduler quanta, which trades some throughput for better interactive performance under high load. Haiku more or less copies the small-quantum choice. In Linux, the quantum size is a tunable, and the system dynamically alters it while running, unless a process uses Round-Robin (real-time) scheduling. So a server and a desktop PC running the same Linux kernel can show quite different performance under load from this one tunable alone.

Haiku’s kernel scheduler has also had dynamic quantum sizing for several years now.

The main difference in user-visible responsiveness, I guess, is due more to bottlenecks in Unix’s asynchronous UI handling, while BeOS and Haiku by design parallelize handling as much as possible at every stage of the UI stack.

BeOS originated with the BeBox, and the idea behind the BeBox was that "one CPU per person is not enough".
That was in 1995, when multi-core personal computers were not at all mainstream the way they have been for the last 15 years, and it drove the whole design of the operating system, while the well-known graphical operating systems were all designed at a time when a single CPU was the de facto situation, which shows in their graphics stacks.

Today, from top (software) to bottom (GPU), for best performance everything should be heavily asynchronous and parallelized. It’s at the heart of Vulkan’s design, for instance, and not without good reason.
