Is Haiku supposed to have better hardware support compared to Linux?


Most Linux distros also don’t have this feature that Windows does; it’s all static swap files, though there is a program you can install to get a dynamic swap file.


Actually, nowadays swap files are a historical relic. Linux, for example, discourages using swap files in favor of having appropriately sized RAM, because if you run out of memory and start paging out to a file you are in “paging hell”, so you’d better find that RESET switch on your system. Any attempt to do “clever” paging is going to fail on you… that’s why it’s hell :smiley:


At least it is a backup option of last resort; it’s better than just filling up your RAM.


Linux still uses swap for suspend-to-disk, and it’s nice to have because otherwise, when you fill memory, everything crashes or freezes. Yes, you don’t want to use it, but in an emergency it is great to have it there to save your day…


How are they a relic? Are you really saying you should be constrained to only opening applications that can fit in memory?


That’s what the OOM killer is for. Not nice, but it prevents your system from becoming unresponsive because of a rogue application, something which happens with swapping but never with the OOM killer.


If an application does not fit into RAM then paging hell is going to be massively more of a pain than not being able to fire it up in the first place.


Not to mention that on SSDs it’s often recommended to disable the swap file entirely, as it will wear out the SSD unnecessarily.


My typical use case was something like gimp *.jpg in a directory with too many files. The OOM killer would kick in and kill the X server, as it was the app using the most memory (with all the bitmaps to draw). So the kernel was safe, but all my work was still lost…

In Haiku there is no OOM killer, and there is no overcommitting of memory either. This means malloc() will fail if there is no memory available. The swap file means there is more memory available, even if it eventually does not get used (because an app made a large malloc but used only a small part of it).


I suppose use cases are different; there’s no way my work Macbook Pro could live without paging because of how bloated all the applications on it are; with Outlook, Spotify, Chrome, IDEA, etc running I’m already hitting the page file and I have 16GB onboard…


That’s quite the broken OOM killer implementation there. It should kill the application with the biggest “rise” in memory consumption, since that’s typically the culprit. Maybe an old Linux kernel? I certainly don’t remember such behavior, and I’m swap-file-free on all systems I work on.


Yes, the X server was eating a lot of memory because it allocated a lot of RAM for all the bitmaps I think. It was indeed a few years ago, I don’t do much graphic stuff on Linux these days :slight_smile:


Today this should also be less of a problem, since most (if not all) Linux distros use an OpenGL-based render backend. That kind of bitmap allocation is outside the reach of the OOM killer :smiley:


Oh that’s interesting… So, does Haiku essentially use the old *nix “swap partition” mechanism except without the actual partition? Like you just tell it how much VM you want and a file that large appears on your HD and stays there unless you change the relevant setting?


Haiku does not have an OOMKiller because we never let apps take more RAM than there actually is. A much better solution, IMO… :wink:


Of course if you don’t want overcommit in Linux you can just switch that off.


Of course. But that’s the problem with Linux. You can get it to do mostly the right thing, but it will take hours of tweaking obscure settings.
And even if you turn that off, a lot of applications and developers assume that malloc cannot fail. So when you run out of memory, you get apps crashing in bad ways. In Haiku, at least we handle the error gracefully, and usually it results in a friendly error message to the user (“you are out of memory, close some apps and try again” or something similar).
Indeed, Linux makes many things possible. But Haiku instead makes them happen.


With SSDs it’s a lot quicker to use one for your swap file.


Even on HDDs, the more the disk spins, the more chance you have of something going wrong. HDDs are more likely to break down than SSDs.


That would increase wear for no obvious benefit (especially since the SSD driver cannot “move” the file to distribute the wear across the entire disk). If you can’t handle a large memory allocation properly in your application, then something is wrong anyway.