[XFS] Global Structures completed and tested

Current Status: I’ve spent the last few days working with xfs_shell and XFS images on my Linux system. Along the way I’ve gotten a fairly good hold of the on-disk format of the metadata structures for free space and inode allocation, how XFS organizes its data, and CRC validation. I have learned a lot from this project, and it’s getting more and more interesting.
Approach: Rather than implementing structures like the free-space B+ trees and inode B+ trees from scratch, I’m thinking of implementing read support for them along with the necessary checks, and then trying to allocate space and inodes from these structures, so that we can create directories on XFS volumes this way. I’m thinking of a dummy allocation first, which I would then optimize for the rest of the filesystem. Is this approach good? Suggestions are welcome. Here are some screenshots from my work in the XFS shell.



3 Likes

There are multiple steps to creating a directory in a filesystem:

  • Find some free space to store the new data
  • Reserve that space so that it is no longer marked free (otherwise something else could allocate the same space and overwrite our data)
  • Write the new content to the allocated place
  • Register the newly allocated file or directory by adding it to its parent directory

All of this, of course, while ensuring filesystem consistency by using the filesystem journal as needed.

So, yes, all these steps need to be implemented one by one; a rough sketch follows below.
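In code, the overall flow might look roughly like this. This is a minimal sketch only: Transaction, AllocateInode(), WriteEmptyDirectory() and InsertDirEntry() are hypothetical names chosen for illustration, not the actual driver API.

	status_t
	CreateDirectory(Volume* volume, Inode* parent, const char* name)
	{
		// Keep all metadata updates in one journal transaction for
		// consistency (assumed RAII: aborts if never committed).
		Transaction transaction(volume->Journal());

		// Steps 1 + 2: find free space and reserve it so nothing else
		// can take it in the meantime.
		xfs_ino_t inodeNumber;
		status_t status = volume->AllocateInode(transaction, inodeNumber);
		if (status != B_OK)
			return status;

		// Step 3: write the new directory's initial contents ("." and "..").
		status = WriteEmptyDirectory(transaction, volume, inodeNumber, parent);
		if (status != B_OK)
			return status;

		// Step 4: register the new directory in its parent directory.
		status = InsertDirEntry(transaction, parent, name, inodeNumber);
		if (status != B_OK)
			return status;

		return transaction.Commit();
	}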

Also, when doing everything with direct disk access, it will be quite slow, since the same data will be read from disk over and over again. It may be a good idea to set up caches. There is a work-in-progress change for that: https://review.haiku-os.org/c/haiku/+/5765 but it was never completed. Maybe it’s a good idea to integrate it into your plan, since the more changes you make, the harder it will be to reuse this existing work.

2 Likes

Okay, I’ll try to integrate it. I don’t have much of an idea about caches and their implementation. Can you suggest some resources on this?

1 Like

There are already some comments in that Gerrit change giving an overview of how it works. Unfortunately I don’t know much more about it.

The last section of The Haiku Book: File System Modules talks about caches.
See if it could help.
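For a first idea of what the kernel block cache looks like, here is a minimal sketch using the block_cache_* functions from Haiku’s fs_cache.h; fd, numBlocks, blockSize and blockNumber are placeholders for values the volume would supply:

	#include <fs_cache.h>

	// Create a read-only cache on top of an already-open device fd.
	void* cache = block_cache_create(fd, numBlocks, blockSize, true);
	if (cache == NULL)
		return B_NO_MEMORY;

	// Get a block: the cache reads it from disk only on the first access.
	const uint8* block = (const uint8*)block_cache_get(cache, blockNumber);
	if (block == NULL)
		return B_IO_ERROR;

	// ... parse the on-disk structure in "block" ...

	block_cache_put(cache, blockNumber);	// every get needs a matching put
	block_cache_delete(cache, false);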

2 Likes

Currently, I’m working on the free-space B+ trees. I want more than one level in this tree and a good version of the internal nodes, but unfortunately the XFS driver can’t read 10K directories; there is some problem with the B+ tree lookup. Furthermore, I came across this discussion https://discuss.haiku-os.org/t/xfs-file-system-testing/12094/22?u=priyanshu from GSoC 2022 contributor @Mashijams highlighting the same problem. He did good work tracing the root cause of the problem to the SearchAndFillPath() function. I tried to look into that too, followed your suggestion on the process, and traced the value of path[i].type, which gives the correct result. The function shows unusual behavior: it works fine for some iterations.

for (int i = 0; i < MAX_TREE_DEPTH && level >= 0; i++, level--) {
	uint64 requiredBlock = B_BENDIAN_TO_HOST_INT64(*ptrToNode);
	TRACE("pathBlockno:(%d)\n", path[i].blockNumber);
	TRACE("pathtype:(%d)\n", path[i].type);
	TRACE("requiredBlock:(%" B_PRIu64 ")\n", requiredBlock);
	if (path[i].blockNumber == requiredBlock) {
		TRACE("here1");
		// This block already has what we need
		if (path[i].type == 2) {
			TRACE("here2");
			break;
		}

Here the for loop above breaks, and no value outside the loop is updated; we just return the status. Still, before reaching the code outside the loop, this gives a segmentation fault. Do you have any suggestions for this?

The B+ tree source code here is messy, to say the least.
There is lots of memory consumption, and there are leaks.
When running in xfs_shell, the memory required exceeds the memory available, which causes this weird problem of only listing an unknown number of the directories.

If you run the same image directly inside Haiku, it could work, but it will be very, very slow, again due to lots of disk reading and memory allocation.
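To illustrate the kind of pattern that causes this (an illustrative sketch, not the actual driver code): a lookup that allocates a fresh buffer per tree level and returns early on errors leaks every buffer it allocated.

	while (level >= 0) {
		uint8* nodeBuffer = new(std::nothrow) uint8[blockSize];
		if (nodeBuffer == NULL)
			return B_NO_MEMORY;

		if (read_pos(fDevice, blockNumber * blockSize, nodeBuffer,
				blockSize) != (ssize_t)blockSize)
			return B_IO_ERROR;	// nodeBuffer leaks on this path

		// ... descend one level; nodeBuffer is never deleted, so every
		// lookup grows memory usage until xfs_shell runs out ...
		level--;
	}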

1 Like

Can you suggest something that can be done to resolve this? Implementing the cache is going to take time, and I still don’t have much of an idea about that implementation. Can something be done to clean up the B+ tree source code?

Studying BFS and its implementation can help:

1 Like

Is this helpful?

http://www.amittai.com/prose/bplustree_cpp.html

https://github.com/niteshkumartiwari/B-Plus-Tree

1 Like

That’s why this kind of project is not yet done and was instead turned into a GSoC project, where you can spend several hundred hours on it.

If there are memory leaks, identify and fix them. This probably means deciding on a consistent allocation strategy (defining who is responsible for freeing the memory, and how the code keeps track of it), and possibly avoiding hand-crafted management with new/delete by using, for example, BReference (a reference counting system) or AutoDeleter (a C++ RAII-based system that automatically releases resources when a function is exited).
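As an example, the leaky pattern sketched earlier could be made safe with an ArrayDeleter from Haiku’s <AutoDeleter.h>; this is a sketch under the same assumed names (fDevice, blockSize), with the buffer now freed on every return path:

	#include <AutoDeleter.h>

	// delete[] happens automatically when nodeBuffer goes out of scope.
	ArrayDeleter<uint8> nodeBuffer(new(std::nothrow) uint8[blockSize]);
	if (nodeBuffer.Get() == NULL)
		return B_NO_MEMORY;

	if (read_pos(fDevice, blockNumber * blockSize, nodeBuffer.Get(),
			blockSize) != (ssize_t)blockSize)
		return B_IO_ERROR;	// no leak: the deleter frees the array

	// ... use nodeBuffer.Get() as before ...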

I have tried to work with the cache and saw the suggestion in the discussion https://review.haiku-os.org/c/haiku/+/5765, where you suggest replacing the read_pos() call with cache_get(). I have done the same, and implemented Cache_get_etc() for reading at a direct offset when reading inodes. Now I want to replace the read_pos() call in Inode::GetFromDisk(), where we fill the buffer. I have replaced read_pos() with the code below, but it’s giving segmentation faults. Can you suggest where I’m making a mistake, or how to correctly replace these calls with the cache? Any references?

	if (Cache->SetTo(block) == nullptr) {
		ERROR("Inode::Inode(): IO Error");
		return B_IO_ERROR;
	}

	const uint8* block_data = Cache->Block();
	memcpy(fBuffer, block_data + offset * len, len);

It’s hard to debug or review just 6 lines of code from a much bigger piece of software.

Please send your changes to Gerrit, so they can be reviewed in full.

Still, here are some things to check:

  • That Cache is not null at that point and points to a valid cache object
  • That offset * len does not go past the end of the block you got from the cache; if offset or len is too large, you may be reading past the data the cache can provide (see the sketch below)
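A hypothetical guard for the second point, with blockSize standing in for the cache’s block size:

	// Refuse reads that would run past the end of the cached block.
	if (offset * len + len > blockSize) {
		ERROR("Inode::GetFromDisk(): read past end of cached block\n");
		return B_BAD_VALUE;
	}
	memcpy(fBuffer, block_data + offset * len, len);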

Thank you for the suggestion, and sorry for the silly mistake; I have resolved the error. Now I’m thinking of changing the inode’s read_pos() calls and then submitting a change for this, and I still have to work with Cache_get_etc().
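For reference, the read_pos() replacement might end up looking something like this. This is only a sketch: fVolume->BlockCache() and the inode-to-block helpers are assumptions for illustration, not the driver’s actual API.

	status_t
	Inode::GetFromDisk()
	{
		// Hypothetical helpers mapping an inode number to its disk block
		// and to its byte offset inside that block.
		off_t block = fVolume->InodeBlock(fId);
		uint32 offsetInBlock = fVolume->InodeOffsetInBlock(fId);

		const uint8* data = (const uint8*)block_cache_get(
			fVolume->BlockCache(), block);
		if (data == NULL)
			return B_IO_ERROR;

		memcpy(fBuffer, data + offsetInBlock, fVolume->InodeSize());

		// Release the cache reference as soon as we are done with it.
		block_cache_put(fVolume->BlockCache(), block);
		return B_OK;
	}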

Hi all,
I have tried to implement the block cache, and it is working fine inside xfs_shell. Now I want to test it inside Haiku itself. I have compiled Haiku with xfs in the image definition and created a test image using:

dd if=/dev/zero of=fs.img bs=1 count=0 seek=5G

After this, /sbin/mkfs.xfs fs.img isn’t working, and while trying to mount this image on the mount point (my testing folder), I’m getting this:


@Mashijams Can you help me with this issue?

Ahoy @Priyanshu ,

I’m not a developer, but it seems your Haiku install is corrupted: the ladybird on the ‘VBOX HARDDISK’ icon indicates it.

Try checkfs -c /boot in Terminal; it may show the error.

Without the -c switch, the checkfs command attempts to fix your BFS partition.
I would do a reboot after that.
If that has not fixed it, you need a new install of Haiku in VirtualBox, or, if you thought ahead and have a spare copy of your virtual drive, you can copy it back over the original one.
It is useful to copy the virtual drive from time to time, manually creating a “snapshot” of it, so you can restore a healthy state of your drive without too much struggle, keeping what you need healthy and reliable at hand.

Ah, of course, the copy of the virtual drive (the file) should be made while the affected guest OS is shut down, so the drive is not changing.

1 Like

checkfs -c /boot isn’t giving any error. I have also run it without -c as you suggested, but it still gives the same result. I’ll try to follow up on the other suggestions as well.

Well, it is not like a message saying

“You have a problem here and here”

but you have to look at what checkfs reports, in detail:

    15870 nodes checked,
    0 blocks not allocated,     <-- OK
    0 blocks already set,       <-- OK
    0 blocks could be freed     <-- if this is not '0',

then your BFS should be cleaned by running checkfs without -c, that is, without the check-only mode that -c stands for.

e.g.
… after a ‘Force reboot’ on the Team monitor panel {CTRL+ALT+DEL},

which skips the housekeeping that normally happens on shutdown: writing back the cache contents and closing things down regularly, which always writes things out from RAM to the disk area before shutdown or before apps close.
However, a force reboot is sometimes needed when the regular way of handling things is not applicable.

    direct block runs               16827 (17.02 GiB)
    indirect block runs             1167 (in 48 array blocks, 62.34 GiB)
    double indirect block runs      0 (in 0 array blocks, 0 bytes)


→ The sum of the first two lines should equal the already-allocated part of the filesystem, i.e. the TOTAL minus the FREE size reported in the df command output.

→ And the double indirect block runs should be zero, as someone once warned me; I assume that means the inodes are OK as well, and there are no cross-linked files or other issues there.

17.02 + 62.34 = 79.36 GiB, so ca. 80 GB

~> df -h
Mount Type Total Free Flags Device
/boot bfs 119.0 GiB 39.5 GiB QAM-P-W /dev/disk/usb/0/0/1

119.0 - 39.5 = 79.5 GiB, so ca. 80 GB

They match.

~> bfs_clean_boot.sh 

   Executing -   checkfs -c /boot   - to find possible/real BFS filesystem errors of installed Haiku operating system ... 

        15870 nodes checked,
        0 blocks not allocated,
        0 blocks already set,
        0 blocks could be freed

        files           13630
        directories     1415
        attributes      459
        attr. dirs      319
        indices         47

        direct block runs               16827 (17.02 GiB)
        indirect block runs             1167 (in 48 array blocks, 62.34 GiB)
        double indirect block runs      0 (in 0 array blocks, 0 bytes)

   Executing -   checkfs /boot  -  to attempt to fix (possible or existing) filesystem errors ...  

        15870 nodes checked,
        0 blocks not allocated,
        0 blocks already set,
        0 blocks could be freed

        files           13630
        directories     1415
        attributes      459
        attr. dirs      319
        indices         47

        direct block runs               16827 (17.02 GiB)
        indirect block runs             1167 (in 48 array blocks, 62.34 GiB)
        double indirect block runs      0 (in 0 array blocks, 0 bytes)

   Re-executing -   checkfs -c /boot  -  to validate fix attempt result(s) 

        15870 nodes checked,
        0 blocks not allocated,
        0 blocks already set,
        0 blocks could be freed

        files           13630
        directories     1415
        attributes      459
        attr. dirs      319
        indices         47

        direct block runs               16827 (17.02 GiB)
        indirect block runs             1167 (in 48 array blocks, 62.34 GiB)
        double indirect block runs      0 (in 0 array blocks, 0 bytes)

 
 BFS filesystem check / possible fix / re-check -- DONE. Bye-bye !.. 

~> df -h 
 Mount             Type      Total     Free      Flags   Device
----------------- --------- --------- --------- ------- ------------------------
/boot             bfs       119.0 GiB  39.5 GiB QAM-P-W /dev/disk/usb/0/0/1
/boot/system      packagefs   4.0 KiB   4.0 KiB QAM-P-- 
/boot/home/config packagefs   4.0 KiB   4.0 KiB QAM-P-- 
/boot/system/var/shared_memory 
                  ramfs             0  28.9 GiB QAM-PRW 
~> 

I suggested rebooting to see whether the ladybird bug disappears from your system disk icon after checkfs, or remains.

For me it did not disappear; I had to reinstall Haiku 64-bit, as I had a further error message in the checkfs output that would not go away.

You can read some more about it in the posts of this thread HERE

1 Like

Of course your mentors can give better advice than me, both for your corrupted Haiku install and for the issue you are facing, once the corruption is solved.
I just wanted to point out that it is a waste of time and effort IF your Haiku itself is not healthy.

1 Like

Thank you @KitsunePrefecture for this suggestion, I’ll try to look into it :slight_smile: