I still have an older thread to catch up on when I get a chance, but it so happens that I did all the calculations Earl is interested in some time ago, and I can dig them out.
On “large” (by BeOS standards, i.e. mid-1990s) disks the BFS allocation group size is always 65536 fs blocks. So this will always be true for any modern disk or a reasonably sized partition of such a disk.
Each BFS file has 12 direct runs, an indirect run and a doubly-indirect run. Each direct run can point to one contiguous portion of an allocation group, with a 16-bit length; so long as there is little or no fragmentation this will be the full 65535 fs blocks.
So for 1024 byte blocks the direct runs add up to 805294080 bytes, and with 8192 byte blocks it is 6442352640 bytes.
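As a sanity check on that arithmetic, here is a minimal Python sketch (the names are mine; the 12 runs and the 16-bit length field are as described above):

```python
MAX_RUN_BLOCKS = 0xFFFF  # a run's 16-bit length field: at most 65535 fs blocks
DIRECT_RUNS = 12         # each BFS inode has 12 direct block runs

def direct_capacity(block_size):
    """Best-case bytes addressable by the direct runs (no fragmentation)."""
    return DIRECT_RUNS * MAX_RUN_BLOCKS * block_size

print(direct_capacity(1024))  # 805294080
print(direct_capacity(8192))  # 6442352640
```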
The indirect run consists of a run of 4 contiguous blocks from an allocation group, which in turn contains more block runs like the 12 direct runs. For 1024 byte blocks 512 such runs will fit, for 8192 byte blocks it’s 4096 runs. So the total from this part of the file’s allocation is up to 34359214080 bytes for 1024 byte blocks and 2198989701120 bytes with 8192 byte blocks.
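The same figures can be reproduced in a few lines of Python; the 8-byte size of an on-disk block run is implied by the counts above (4 × 1024 / 8 = 512 and 4 × 8192 / 8 = 4096):

```python
MAX_RUN_BLOCKS = 0xFFFF  # a run's 16-bit length field: at most 65535 fs blocks
RUN_SIZE = 8             # bytes per on-disk block run, implied by the counts above
INDIRECT_BLOCKS = 4      # the indirect run always covers 4 contiguous fs blocks

def indirect_capacity(block_size):
    """Best-case bytes addressable via the indirect run (no fragmentation)."""
    runs = INDIRECT_BLOCKS * block_size // RUN_SIZE  # runs that fit in 4 blocks
    return runs * MAX_RUN_BLOCKS * block_size

print(indirect_capacity(1024))  # 34359214080
print(indirect_capacity(8192))  # 2198989701120
```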
However, the doubly-indirect run also consists of a run of 4 contiguous blocks from an allocation group, but here the block runs it contains each point to another 4 contiguous blocks from an allocation group, and those in turn contain a series of block runs of exactly 4 blocks each that are used for the data. So for 1024 byte blocks there are 262144 such data runs, giving a total of 1073741824 bytes, and for 8192 byte blocks it is 16777216 runs totalling 549755813888 bytes.
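Again as a sketch, with every level of the tree fixed at 4-block runs as just described (names are mine; the 8-byte run size follows from the counts above):

```python
RUN_SIZE = 8      # bytes per on-disk block run
TREE_BLOCKS = 4   # every run in the doubly-indirect tree is exactly 4 fs blocks

def double_indirect_capacity(block_size):
    """Bytes addressable via the doubly-indirect run (all runs 4 blocks)."""
    runs_per_level = TREE_BLOCKS * block_size // RUN_SIZE
    # two levels of indirection, then 4-block data runs at the bottom
    return runs_per_level ** 2 * TREE_BLOCKS * block_size

print(double_indirect_capacity(1024))  # 1073741824
print(double_indirect_capacity(8192))  # 549755813888
```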
(If any of the intermediate structures cannot be allocated due to fragmentation, growing the file fails, although plenty of space may remain for other operations.)
So that comes to 33.7 GiB for 1024 byte blocks and 2.5 TiB for 8192 byte blocks.
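Putting the three pieces together in one sketch (constants as above, function name my own):

```python
MAX_RUN_BLOCKS = 0xFFFF  # 16-bit run length: at most 65535 fs blocks per run
DIRECT_RUNS = 12         # direct block runs per inode
RUN_SIZE = 8             # bytes per on-disk block run
TREE_BLOCKS = 4          # indirect structures use runs of 4 contiguous blocks

def max_file_size(block_size):
    """Best-case maximum BFS file size for a given fs block size."""
    runs_per_level = TREE_BLOCKS * block_size // RUN_SIZE
    direct = DIRECT_RUNS * MAX_RUN_BLOCKS * block_size
    indirect = runs_per_level * MAX_RUN_BLOCKS * block_size
    double_indirect = runs_per_level ** 2 * TREE_BLOCKS * block_size
    return direct + indirect + double_indirect

print(f"{max_file_size(1024) / 2**30:.1f} GiB")  # 33.7 GiB
print(f"{max_file_size(8192) / 2**40:.1f} TiB")  # 2.5 TiB
```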
For today’s largest single SATA drives this isn’t necessarily enough to make a disk image even with the 8192 byte block size. It probably makes sense just to mention it in the documentation as a limitation and leave the default block sizes alone.
This is quite a contrast with a traditional Unix filesystem, where the bulk of a very large file lives in the double (or triple) indirect blocks; in BFS the doubly-indirect run doesn't actually achieve very much except blunting the impact of mild fragmentation.