I’m not on the dev team, but Beta 5 has been incredibly stable for me. Why update just for the sake of updating? We’re not Linux, where everything needs kernel support and there is a new kernel version every month or so. Read the monthly updates and you’ll see steady progress, but nothing that absolutely needs to be released right now. Meanwhile, things like the wayland_server get a lot of attention, bringing us closer to more seamless porting. The Firefox clones are already working better than when we first got them. But that wouldn’t show up in the version number.
If you want to live on the bleeding edge, you can. Just switch to nightlies and be prepared to dig in when things break.
Although I’m very happy with R1B5 here too, there are fixes in the main code that are needed to bump some libraries that are hitting a brick wall at the moment. No big deal yet, but still.
Begasus will surely mention other stuff he faced (like changes in CPUSET macros, I guess). On my side:
Even just updating the builders to a newer beta5 (hrev57937_129) would help, as the builders seem to be stuck at hrev57937_111, and that sometimes causes incompatibilities with newer beta5 versions (the main thing here is having to #define _GNU_SOURCE for the builders, when that’s not necessary on an up-to-date beta5).
Not a big deal, but also odd having to patch things for an outdated beta5.
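For reference, the workaround in question is just a guarded define before the includes; a minimal sketch (the exact header each recipe patches varies):

```c
/* Workaround for the older beta5 on the builders: some prototypes are
 * only exposed there when _GNU_SOURCE is defined. The guard keeps it
 * harmless on an up-to-date beta5, where the define isn't needed. */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif

#include <stdio.h>  /* whatever header hides the needed prototypes */
```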
@BiPolar already mentioned CPUSET; part of that is also cpu_count. The first part we can patch, the latter we can’t, as it isn’t implemented in any form on beta5 (@korli fixed that in nightly). Other than that … can’t mention anything atm, but some recipes are already on hold for beta6 (I tend to move on when I can’t build for beta5, so I’m not really keeping score).
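Where a port only needs the number of CPUs, there is at least a workaround on beta5: Haiku’s long-standing get_system_info() already reports it. A minimal fallback sketch (the helper name is made up, and this is no substitute for real affinity support):

```c
/* Fallback for ports expecting a cpu_count()-style helper that beta5
 * doesn't provide. Uses the kernel kit's get_system_info(), which has
 * been in Haiku's public API for a long time. */
#include <OS.h>

static int fallback_cpu_count(void)
{
	system_info info;
	if (get_system_info(&info) != B_OK)
		return 1;  /* be conservative if the call fails */
	return (int)info.cpu_count;
}
```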
Someone more qualified than I will be able to explain the difference between microkernels and macrokernels, I’m sure. But I’ll give you an example:
When I was running a Linux box, suddenly VirtualBox stopped working. I asked around and was told that it needed this, that and the other compiled into the kernel, and that it had been taken out by the distro’s developers.
This is VirtualBox, which runs on Windows and macOS just as a regular app.
Six months and about three or four kernel updates later, VB started working again.
Develop a Haiku app, and ten years later it will still run. Develop a Linux app, and keep your eyes on the kernel development team, and on the distro developers who decide which modules will get compiled into their version of the kernel, because they might do something that kills your app. Yes, you can compile a custom kernel, but that is waaaayyy beyond the capacity of 99% of computer users.
If that sounds negative towards Linux, sorry. Their way of working has its own advantages, especially for power users. And I also sometimes get annoyed with the conservatism of the Haiku devs.
We did go through a stage (around Beta 3, IIRC) where breakages happened regularly and we needed regular fixes. But we’ve reached a stage where Haiku’s predictability and stability just make it the better option for a hobbyist like me. If it ain’t broke, don’t fix it.
Sorry, but that is completely inaccurate. Linux takes extreme care not to break userspace applications with kernel updates. You can run applications built on the earliest Linux versions on current versions. Distros regularly break this (especially glibc), but that is not a fault of the kernel.
Regarding VirtualBox, they decided to develop kernel add-ons out of tree, and distros don’t always ship them, so that’s hardly a fault of the kernel either. That’d be like complaining that some Haiku app doesn’t work because it needed a custom kernel its author developed.
Microkernels have advantages by design … a good article regarding Minix 3 is this one: www:documentation:reliability [Wiki]. But this does not mean that a microkernel won’t break things … it just means that it will break things in a more controlled and probably easier-to-recover way.
Good luck with that project. Linux exclusively used the a.out binary format until kernel version 1.2. That was gradually replaced by ELF, and a.out compatibility was finally phased out in version 5.19.
So to run the “earliest” applications, you’d first have to compile a.out back into … the kernel.
Why was it felt necessary to develop the kernel add-ons only for Linux, while the macOS and Windows versions just carried on? Because in Linux’s design philosophy everything points back to the kernel. Which is what I stated originally, I believe.
Why do people keep bringing up microkernels? Neither Linux nor Haiku is a microkernel. Their designs are in fact very similar: they are both typical UNIX-style kernels, implementing the standard UNIX system calls and a few extensions. They both have loadable modules for drivers.
The main difference between Linux and everyone else is that its internal ABI for modules is not stable. That means modules need to be recompiled for each version of the kernel. No problem if the module is open source: either it will be shipped directly with the kernel, or the compilation process can be integrated into the package management system (for example with dkms). When the module is closed source, it’s up to the project providing it to make sure they have binaries for each kernel version. I’d guess this is what happened with your VirtualBox.
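To illustrate the dkms route: a minimal dkms.conf is enough to get an out-of-tree module rebuilt automatically against every new kernel. (The package name and paths below are made up for illustration.)

```
# /usr/src/hello-1.0/dkms.conf -- hypothetical out-of-tree module "hello"
PACKAGE_NAME="hello"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="hello"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="yes"   # rebuild whenever a new kernel gets installed
```

After `dkms add hello/1.0` and `dkms install hello/1.0`, the rebuild happens per kernel; that’s exactly the maintenance a closed source vendor has to redo by hand for every kernel release.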
VirtualBox does kernel-level stuff on the other OSes as well, but since the ABI is stable, or at least the releases are less frequent, they can build a version of their driver once every few years when a new Windows or macOS version is released. They would do the same on Linux systems if there weren’t hundreds of them, with new ones created each day.
Open source alone doesn’t help, though. The VirtualBox kernel modules are a good example of that. If your driver is part of the Linux source tree, everything is fine. As soon as it isn’t, someone has to take care of making sure it’ll compile on your kernel. If your distribution does that for you, you are lucky. If not, you’ll regularly see breakage. That’s why it’s so important for Linux developers to get their stuff into the kernel.
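To make the per-kernel rebuild concrete: even a trivial out-of-tree module like the generic hello-world sketched below (not VirtualBox’s actual code) has to be recompiled against each kernel’s headers, because the module loader refuses anything built for a different version.

```c
/* hello.c -- a trivial out-of-tree Linux module (illustrative only).
 * The build bakes the running kernel's version string ("vermagic")
 * into the binary, and modprobe/insmod reject a module whose vermagic
 * doesn't match the running kernel -- hence the rebuild per kernel. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Out-of-tree hello module");

static int __init hello_init(void)
{
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Built with the usual `make -C /lib/modules/$(uname -r)/build M=$PWD modules`; automating that step for every installed kernel is precisely what dkms does.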