Grow a BFS filesystem

It would be really beneficial to be able to grow a BFS partition. I’m aware of earlier work:

Still, there’s nothing usable yet that I’m aware of.

I’d like to update my guide for installing to a partition using dd directly, but this is going nowhere. After dd-ing to a 20 GB partition, I’m stuck with a 600 MB or so filesystem with not enough space to even update the damn thing. More than 19 GB going to waste. Yeah, I could use yet another partition, but I’m running out of MBR partitions, and even then, why would I do that? (Could also use QEMU.)

I gave up dd-ing and tried using the graphical Installer, but got some weird data error. Using dd again from inside the Live USB worked, but of course predictably resulted in a 600 MB filesystem.

Too bad it’s not just one hex value in the superblock or something (or is it?); if it could be done offline that would work also for me.

The code from 2012 was nearly merged. This ten-year-old problem needs some love from a Haiku dev. Whining without contributing code myself is not helping much, I know. I would just love to be able to circumvent Installer issues and easily dd an image to some random partition. It would make testing and installing that much easier. It just needs an updated guide (I need to fix an anyboot-to-raw issue) and … filesystem resizing.


Getting the feature to resize BFS partitions would be very nice, obviously. But so is having a robust Installer application. You may want to open a bug report with the exact description of your circumstances and possibly a step-by-step replication.
“Weird data errors” are not in Installer’s job description. :slight_smile:

1 Like

The earlier work can be found on Gerrit: search for "bfsresize" (status:open OR status:merged).

1 Like

The “BFS resize” patchset is large and dangerous partially because it includes support for shrinking as well as growing. There are a lot of reasons why “online” shrinking is a particularly dangerous idea (for instance, it requires inode remapping, and as the kernel VFS has a ton of assumptions that inode values will not change, this is kind of invasive and prone to causing bugs.)

Expanding BFS “online” makes much more sense, and shrinking can be relegated to an “offline” task. However, that will require reworking the patchset considerably and merging only small portions of some commits. Nobody has yet dared to try that. It’s on one of my TODO lists, but it’s pretty far down.
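To make the inode-remapping problem concrete: in BFS, an inode “number” is essentially the inode’s on-disk location, encoded as a block_run of (allocation group, start, length). A minimal sketch of why shrinking forces renumbering, assuming the encoding described in “Practical File System Design with the Be File System” (the `AG_SHIFT` value and function names here are illustrative, not Haiku’s actual code):

```python
# Sketch: why shrinking BFS forces inode renumbering.
# A BFS inode number encodes the inode's on-disk position as a
# block_run (allocation_group, start, length); the absolute block is
# (allocation_group << ag_shift) + start. Any inode located past the
# new end of a shrunken volume must be moved -- and moving it changes
# its inode number, which the kernel VFS assumes never happens.
# (Layout per the BFS book; names and values are illustrative.)

AG_SHIFT = 13  # example: 1 << ag_shift blocks per allocation group

def inode_block(allocation_group: int, start: int) -> int:
    """Absolute block number of an inode from its block_run."""
    return (allocation_group << AG_SHIFT) + start

def must_move(allocation_group: int, start: int,
              new_num_blocks: int) -> bool:
    """True if this inode lies beyond the shrunken volume's end."""
    return inode_block(allocation_group, start) >= new_num_blocks

# Shrinking to 6 allocation groups: an inode in group 5 survives,
# but one in group 7 must be relocated (and thus renumbered).
print(must_move(5, 100, 6 << AG_SHIFT))  # False
print(must_move(7, 100, 6 << AG_SHIFT))  # True
```

Growing, by contrast, never invalidates an existing (allocation group, start) pair, which is one reason the “grow” half of the patchset is the safer one to extract.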


This is NOT primarily about the installer. See the topic title.

That indeed makes a lot of sense. Shrinking is dangerous, I agree. Just having support for growing (as per the topic title) would have many benefits and would, in theory, not be very dangerous (famous last words). Totally understandable that it’s quite some work. Btw, even offline expansion would be nice, but online would be more user-friendly.

1 Like

To give a different perspective on the problem of using dd to install: what I do is put the initial 600 MB partition after some other partition that I can grow (e.g. an ext4 Linux partition, or even better, Linux swap). Then, once I’ve booted the 600 MB install, I create a new partition and install to it from Installer within the booted 600 MB system. Then I can delete the 600 MB one and grow the previous partition. You can even shrink a partition by 600 MB to make space for the “jump start” install and grow it again afterwards, or delete a swap partition and then reinstate it. It’s a bit involved, but it gets there in the end.

1 Like

I’m fully aware of that, but it is quite a workaround. I just want to dd that anyboot image to a partition and be done. There’s no reason at all why it couldn’t grow the filesystem. Also for USB sticks, it would be nice to grow the filesystem and have a live installation that can be upgraded and expanded using HaikuDepot (currently there’s not enough disk space for that).

Close to success – I edited the BFS superblock on disk using a hex editor (offline; it can’t be done while running Haiku/BeOS, since the superblock is copied into memory and written back on shutdown), increasing both the number of blocks and the number of allocation groups. It worked – I could upgrade and install software. But somehow the new software wasn’t executable, and after a reboot the system didn’t boot, hanging on “Booting…”. A checkfs and an additional makebootable didn’t help. So I guess just increasing those two numbers in the superblock is not safe. Bummer.
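For anyone who wants to poke at the same fields: the superblock sits at byte offset 512 of the partition, and the two values involved in the hack above are `num_blocks` and `num_ags`. A hedged parsing sketch follows – the field order and offsets are taken from “Practical File System Design with the Be File System”, so verify them against Haiku’s `bfs.h` before touching a real disk:

```python
import struct

# Hedged sketch: parse the leading fields of a BFS superblock (the
# superblock lives at byte offset 512 of the partition). Field order
# per "Practical File System Design with the Be File System"; check
# against Haiku's bfs.h before trusting this on a real volume.

SB_FMT = "<32s I i I I q q i I i i i i"  # little-endian on-disk layout
FIELDS = ("name", "magic1", "fs_byte_order", "block_size", "block_shift",
          "num_blocks", "used_blocks", "inode_size", "magic2",
          "blocks_per_ag", "ag_shift", "num_ags", "flags")
BFS_MAGIC1 = 0x42465331  # 'BFS1'

def parse_superblock(raw: bytes) -> dict:
    values = struct.unpack_from(SB_FMT, raw)
    sb = dict(zip(FIELDS, values))
    assert sb["magic1"] == BFS_MAGIC1, "not a BFS superblock?"
    return sb

# Build a synthetic superblock to demonstrate: 2 KiB blocks, a
# ~600 MB volume (307200 blocks), 19 allocation groups. The magic2
# constant and per-AG values here are illustrative.
fake = struct.pack(SB_FMT, b"Haiku".ljust(32, b"\0"), BFS_MAGIC1,
                   0x42494745, 2048, 11, 307200, 1000, 2048,
                   0xDD121031, 1, 14, 19, 0)
sb = parse_superblock(fake)
print(sb["num_blocks"], sb["num_ags"])  # the two fields the hack changed
```

As the failed boot above suggests, patching `num_blocks` and `num_ags` alone is not enough: other on-disk structures (notably the block bitmap) also depend on the volume size.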

Oh yeah I totally agree, the workaround is a hack. Resizing support would make the whole experience way nicer/easier.

1 Like

I noticed there is no link to makebootabletiny.c in your guide; you can get it here

I wonder if there is a better place to keep this file?

Also: it’s possible to install Haiku from Linux by copying files directly to a BFS partition using the FUSE driver, but it’s even more involved: Problems booting on a Sony Vaio P - #8 by Munchausen

1 Like

It doesn’t even have to be online, just being able to enlarge a BFS filesystem that is not mounted would be a big improvement. Boot Haiku from USB drive, enlarge the partition and the filesystem on the harddrive, and boot from the harddrive again. Of course online is better but I would be happy with the offline version.

1 Like

Why do people keep talking about makebootable in every situation where booting or filesystems are involved? You pretty much never need it.

If you use EFI, you never need it.
If you use Installer, you never need it.
If you use dd to copy the whole disk image to another disk, you never need it.

The only cases where you need makebootable:

  • Somehow you decided not to use Installer and copied your /boot/system partition to another partition manually (using Tracker, cp, …). I don’t see why anyone would do that.
  • You used dd to copy a single partition to a different position on disk

If you didn’t do one of these two things, you don’t need makebootable.

If it was this easy, there wouldn’t be a branch with unfinished work sitting untouched for 8 years, don’t you think?

1 Like

I don’t see why this is a problem. When shrinking, you know which inode numbers are out of range for the new size, and you know these have to be remapped. This can be handled directly in the filesystem for the most part. The kernel can continue to use the old inode numbers and never notice anything.

That being said, it’s a good idea to finish and merge the “growing” part of bfsresize first, and see about the “shrinking” part later, because it is both more complex and less useful.

Sorry to trigger you, but I only said that I noticed there was no link to it in the article that mentions it (this one: Installing a Haiku Image to a Disk Partition | Haiku Project). I didn’t recommend it for anything in particular. So I can’t see how your comment is a relevant reply to mine?

Incidentally, I’ve done both of these. For the first, I linked it above; in that thread you mentioned that there are other options. You may do the second if you can’t boot any install media and want to install onto a disk that already has other OSes on it.

Guess what this topic is about?

Maybe because shrinking was included, adding hugely to complexity?
Maybe because the SoC work was not integrated?
Maybe because of sarcastic devs?

I’m trying to understand the BFS basics. A quick hack on the superblock was something I felt was worth trying. It seems to go wrong with the allocation groups.

1 Like

In BFS you have the log and the block bitmap at the beginning of each partition. Changing the partition size means the bitmap needs to grow, too, which means you always need to move inodes around, no matter whether you grow or shrink the partition. So IOW the complexity of growing the file system is pretty much the same as shrinking it.
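The bitmap-growth part can be shown with simple arithmetic: the bitmap holds one bit per block and sits near the start of the volume, so a larger volume needs more bitmap blocks, and those extra blocks would land where file data already lives. A rough sketch (one-bit-per-block layout per the BFS book; reserved-block details deliberately omitted):

```python
# Rough sketch: how many block-bitmap blocks a BFS volume needs.
# BFS keeps one bit per block, with the bitmap stored near the start
# of the volume (right after the superblock). Growing the volume can
# grow the bitmap, and the extra bitmap blocks would overlap existing
# data -- hence inodes/data must be relocated even for a pure "grow".

def bitmap_blocks(num_blocks: int, block_size: int = 2048) -> int:
    """Blocks needed to hold one bit per block of the volume."""
    bits_per_block = block_size * 8
    return -(-num_blocks // bits_per_block)  # ceiling division

old = bitmap_blocks(307_200)     # ~600 MB volume at 2 KiB blocks
new = bitmap_blocks(10_485_760)  # ~20 GB volume at 2 KiB blocks
print(old, new)  # 19 vs. 640 bitmap blocks -- hundreds more needed
```

This is consistent with the failed superblock hack earlier in the thread: bumping `num_blocks` without extending the bitmap leaves the allocator working off a bitmap that is too small for the volume it now claims to describe.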


The complexity is undeniable. Maybe I’ll give it a try this winter (if I’ve got spare time).