Keeping the home directory clean!


As I understand it, the defaults currently in use for the folder structure along with the package manager allow one to keep the home directory relatively clean - i.e. without hidden configuration files.

I came across the following on OSNews about dotfiles in Linux which I thought would be worth sharing:
( ).


That guy is making much ado about nothing.

Dotfiles are hidden by default on Linux… they are where they are supposed to be. I’ll grant you that putting them in .config, in a sanely named subdirectory, is a better practice though.
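For reference, the XDG convention the article pushes is simple enough to follow. A minimal Python sketch of the lookup order (the `myapp` name is just a placeholder, not any real application):

```python
import os
from pathlib import Path

def config_dir(app_name: str) -> Path:
    """Resolve an app's config directory per the XDG Base Directory spec:
    use $XDG_CONFIG_HOME if set and non-empty, else fall back to ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / app_name

# With XDG_CONFIG_HOME unset, "myapp" lands under ~/.config/myapp
# instead of cluttering $HOME with a ~/.myapp dotfile.
```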

You could create something like the Windows registry… but it has unforgiving overhead. With plain config files, overhead scales with what you are actually running, not with everything you have installed (which is what bloats the registry).

Also the fact that he wants people to use the XDG environment variables is very telling… that’s a bunch of rot.


In the comments of that OSNews article, an argument broke out about whether a database would help solve the issue. One of the counterpoints was that a database would add bloat.

Perhaps our native in-filesystem database, aka attributes, could be leveraged in such a way to further enhance the storage of config data. We wouldn’t need to add a database to do this. We already have this. This would of course be a Glass Elevator, R2 and beyond topic.


There’s been talk of not even doing attributes like they are done in BFS in the next gen R2 FS… but doing it via a dedicated database. Mainly because the attributes slow down normal operations that have nothing to do with attributes.

So Haiku would have something like Baloo on KDE…


Uh, no, we pretty much depend on attributes for virtually everything. BFS performance isn’t too good perhaps (though a lot of this is due to our I/O layer above BFS), but XFS and ReiserFS, which also have robust xattr implementations, have excellent performance. So I don’t think this is a concern as such.


Yes, they are depended on for everything… but being integral to the FS makes it nearly impossible to optimise how they are stored or accessed, while storing them in a database on top of the filesystem is obviously quite flexible. I forget who exactly, but several of the Haiku developers have talked about this point in the past… on the ML, IIRC.

Also, accessing attributes and accessing the file contents themselves are often not done at the same time… so performance can be improved by storing them completely separately.

Also dunno if anyone else is seeing this… but the forum is extremely slow the past few days even timing out pretty often.

There might also be a compromise where some data makes sense to put in the xattrs, and some data really makes more sense in a dedicated DB. The problem with how Haiku does it currently is that the data gets lost anyway if you move the file to, say, Linux and open it there… so one way or the other, the attributes are only portable to BeOS and Haiku themselves.


Huh? Linux has supported extended attributes for a long time. You can read and write them on any modern Linux distro, assuming the underlying filesystem supports them. We use them to cross-compile Haiku on Linux when we can.
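To illustrate: on Linux, user-namespace xattrs are reachable straight from the Python standard library. A small sketch, assuming a filesystem mounted with xattr support (ext4, XFS, Btrfs…); `os.setxattr`/`os.getxattr` are Linux-only, the attribute name is just an example, and the calls fail with ENOTSUP where xattrs are disabled:

```python
import os

def tag_file(path: str, name: str, value: bytes) -> None:
    # "user." is the unprivileged xattr namespace on Linux.
    os.setxattr(path, f"user.{name}", value)

def read_tag(path: str, name: str) -> bytes:
    return os.getxattr(path, f"user.{name}")
```

This is roughly what the Haiku build does when cross-compiling on a Linux host: BFS-style attributes get stashed where the host filesystem can keep them.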

Having a database of them makes things infinitely more un-portable, so that isn’t an argument either way.


That’s not right. There are no performance problems with attributes. Moving them to a database would make things slower: instead of just writing the attributes with the inode data as it is now, you would need to open the database, update it, and close it, every time you do something. That would be slower.

What slows down IO operations is 1) the node monitoring system (because at every file update, we need to check if someone needs to be notified) and 2) the query support, because the attribute index needs to be updated.

  1. is not going away, because it is greatly counterbalanced by the fact that apps which are node monitoring do not need to poll for changes. So overall we have a performance gain.
  2. is… exactly like updating a database. Whether it is inside or outside the filesystem does not change anything: you still need to update it. Performance issues will not be fixed by moving things around.
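Point 1 can be made concrete with a toy in-memory model (not Haiku’s actual node_monitor API; all names here are made up): with monitoring, writers pay a small per-change notification cost, but watchers do no work at all unless something actually changed, whereas pollers pay per file per tick regardless.

```python
from collections import defaultdict

class ToyNodeMonitor:
    """Toy model of node monitoring: every write checks for watchers
    (the cost described above), but watchers never have to poll."""
    def __init__(self):
        self.watchers = defaultdict(list)  # path -> list of callbacks

    def watch(self, path, callback):
        self.watchers[path].append(callback)

    def write(self, path):
        # The per-update cost: check whether anyone wants to be notified.
        for cb in self.watchers.get(path, []):
            cb(path)

events = []
mon = ToyNodeMonitor()
mon.watch("/boot/home/config/settings/app", events.append)
mon.write("/boot/home/config/settings/app")  # watcher notified once
mon.write("/boot/home/other")                # no watcher, near-zero cost
```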


That’s not right. There are no performance problems with databases. Moving the attributes to one would make things faster. As it is now, the attributes are written with the inode data, so the disk subsystem is forced to read them even when they aren’t needed, and the on-disk format can’t be packed more tightly. You could instead move them off into a separate database and only access them as needed. The point is that they frequently aren’t needed, and having a configurable way to turn this on and off without reformatting, or without maintaining a separate FS, is desirable.

Instead of just posting a naive response… one contradicted by several people who have worked on Haiku in the past… it’s best not to say anything. If someone comes up with an alternative filesystem for Haiku plus a database, and it proves much faster and more flexible, I don’t see any reason why it should not be adopted.

There are many cases where BFS does the wrong thing by default… for example, query support on a filesystem where you are building software makes things much slower for little to no benefit. And simply disabling query support is also bad… as you then either have to maintain two filesystems or forgo queries entirely.


The inode is one block in BFS. It is not filled with data, far from it; the remaining space is filled with attributes. Reading them is effectively free, because you were going to read the block anyway (unless you also don’t need the filename, permissions, etc., which are also stored there). We went as far as designing a custom vector icon format so that icons would fit in this relatively small space. There used to be versions of Tracker using SVG, and the performance hit was very noticeable.

If I follow your logic, we should also move filenames out of the filesystem.

Query support on a filesystem holding source code will be useful once we start using it. Imagine a make replacement that sets an “already built” attribute on source files, and an editor that removes it. No more walking the whole filesystem to find out-of-date files! But if you stick to Unix tools, this is not going to happen.
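That make-replacement idea can be sketched with a toy in-memory index standing in for the BFS attribute index (none of these names are real Haiku or BFS APIs): a query like “already built == false” becomes an index lookup rather than a tree walk.

```python
class ToyAttrIndex:
    """Toy stand-in for a BFS attribute index over an
    'already built' attribute on source files."""
    def __init__(self, files):
        self.not_built = set(files)   # files whose attr is unset

    def mark_built(self, path):
        # What the make replacement would do after compiling.
        self.not_built.discard(path)

    def mark_dirty(self, path):
        # What the editor would do when a file is saved.
        self.not_built.add(path)

    def query_out_of_date(self):
        # The query: an index lookup, no filesystem scan.
        return sorted(self.not_built)

idx = ToyAttrIndex(["main.c", "util.c"])
idx.mark_built("main.c")
idx.mark_built("util.c")
idx.mark_dirty("util.c")  # edited after the build
# idx.query_out_of_date() -> ["util.c"]
```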



Uhh, that’s not true at all. Databases are designed for high-throughput use cases; they aren’t designed for sparse random reads of arbitrary locations. There would be a ton of added latency in going through a database instead of xattrs.

Citation needed.

Also citation needed.

BFS performance problems are mostly journal and disk-structure related. XFS and ReiserFS prove that attributes are of almost no concern to performance.

So at this point, back up your claims with hard data (on Linux please, where they actually have a fully optimized I/O stack, unlike Haiku), or quit claiming that things “would be better” with absolutely zero evidence for it.


Using Be Inc.'s BeFS was fast, as I recall. Attributes were a dream back then, and the journal was rather peppy. It was fast, and useful beyond anything else at the time. Other journaling, attribute-enabled filesystems of recent years still seem rather unpolished, in my opinion.

I do notice OpenBeFS, particularly Haiku’s implementation, seems rather slow in comparison. It is stable, but it seems slower than it was in BeOS. Perhaps this is due to both Haiku and OpenBeFS still being relatively unoptimized, and to debug options being enabled by default. Maybe having larger volume sizes on today’s hardware plays a role too.


That’s all well and good, except nothing is free. More modern filesystems might pack the inodes differently so that the space isn’t wasted. Obviously ZFS, for instance, isn’t doing conventional inodes…