XFS file system testing

According to Tutorial — The LLDB Debugger it should be possible to do this:

  • First start lldb
  • Use the command process attach --name xfs_shell --waitfor

Then start xfs_shell using jam run

This will allow lldb to attach to the process right as it is starting, hopefully it will help.
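Concretely, the attach-and-wait sequence looks something like this (the process name is from this thread; the exact jam invocation depends on your build tree):

```
$ lldb
(lldb) process attach --name xfs_shell --waitfor

# then, in another terminal, start the target as usual:
$ jam run xfs_shell
```

Once xfs_shell starts, lldb should attach to it before it gets far enough to crash.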


Actually, this is almost the same process I tried, and unfortunately it gave me the same results.

Anyway, I got GDB running on my local machine and I am able to continue the process under the debugger, but the problem now is that whenever a segmentation fault occurs, xfs_shell terminates the entire process.

So when I try to reproduce the segmentation fault for GDB to catch, we can't get any trace, because xfs_shell terminates and GDB just reports:

Inferior 1 (process ___) exited normally.

If we could get xfs_shell not to terminate, we could then get a backtrace.

I think I should ask this on the mailing list to get help from other developers as well.

I’m not familiar with how lldb works. It’s strange that gdb would not see the segmentation fault.

One option is adding code in xfs_shell to catch the SIGSEGV signal and not exit when receiving it.

Another option is to start the process from inside lldb (using its run command), but I am not sure how easy that is, since you would need to replicate what "jam run" does. Maybe other developers can share their tips indeed.

In my case I run these things inside Haiku, and in that case, I can attach Haiku Debugger after the crash occurs, which is very convenient. I don’t know why this isn’t the default behavior in other systems.

The backtrace is usually found in the syslog, if you need it.

I checked and there isn’t any backtrace.

I managed to find the function that is causing the segmentation fault when reading B+tree-based directories.

Basically I added lots of TRACE calls and finally found where our code is breaking. :)

It's this function in src/add-ons/kernel/file_systems/xfs/BPlusTree.cpp in Haiku's main repository.

Now to fix it, I checked the values of the parameters we pass to the read_pos function and got one strange result: path[i].blockData == 0. The remaining parameter values look fine compared with other, successfully read directories.

Another thought: why doesn't it report a proper error? Right now it simply segfaults.

I did some more testing to find out which call to SearchAndFillPath breaks the code, and found that this call in src/add-ons/kernel/file_systems/xfs/BPlusTree.cpp in Haiku's main repository is the one giving the segmentation fault.

We are passing DATA as the parameter in this call, which gives us our path as fPathForData.

There isn't any fixed number of directories we can read successfully; sometimes it is 20K and sometimes just 1K.
But the cause of the segmentation fault is the same in every test I did.

Does anyone have ideas on how I can fix this, or maybe some more TRACE output I should look into to get a clearer picture of the problem?


If you tell read_pos to read data from disk and store it at address 0, it will happily do so. But address 0 is not writable, so this will crash. Since read_pos cannot reliably know if an address is valid or not, it can’t catch this and return an error. Instead, it just crashes (before you can get to the error checks).

What seems strange to me in this code is that this buffer is allocated a few lines above:

path[i].blockData = new(std::nothrow) char[len];

However this is guarded by a condition:

if (path[i].type != 2) {

Overall this code seems to have a lot of checks and allocations/reallocations of this buffer.

So, the first question is: what is the value of path[i].type when you get the crash? Then you can follow the code in this function knowing which code path is supposed to be taken. And check carefully how the blockData buffer size is computed and how the buffer is allocated.

Also, what does this type variable mean? Are these things just called "type 1" and "type 2" in the XFS spec, or do they have better names? If they have better names, why not use an enum matching them?


So during my testing of XFS (now directly inside Haiku!) I ran into one really strange bug.

When I try to mount the file system at a mount point (in my case a directory named Testing), I can access all its entries from Terminal, but when I open the Testing folder through the GUI it shows no items.

As you can see in the screenshot (not reproduced here), I can list all entries in the terminal, but the Testing folder shows empty. :(

Is this behaviour due to something wrong with our XFS driver code, or does Haiku just work this way?

Still, I am happy to say we can mount XFS without any kernel crash or hangs.

Here is a successful read of block directories (screenshot omitted).

It's a missing feature in the XFS driver code. Tracker gets several things from the volume, not just the name, before it displays it. You can check the logs from XFS (they will be in /var/log/syslog).

For example you can do this:

tail -f /var/log/syslog

And in another terminal:

open ~/Desktop/Testing

In particular, you may need to implement the function that fills in fs_info so that it returns some values. Also check the behavior of the "." and ".." directory entries (I don't remember if it's the responsibility of the filesystem or of the OS to manage them). Check for any function that you may not have implemented yet and that Tracker is trying to use, and see if you can implement it.


Tried to compile the xfs driver on 64 bit and got the following error:

src/add-ons/kernel/file_systems/xfs/Extent.h:110:38: error: 'offsetof' within non-standard-layout type 'ExtentDataHeaderV5' is conditionally-supported [-Werror=invalid-offsetof]
  110 | #define XFS_EXTENT_CRC_OFF  offsetof(ExtentDataHeaderV5, crc)
      |                                      ^
src/add-ons/kernel/file_systems/xfs/Extent.cpp:294:16: note: in expansion of macro 'XFS_EXTENT_CRC_OFF'
  294 |         return XFS_EXTENT_CRC_OFF - XFS_EXTENT_V5_VPTR_OFF;
      |                ^~~~~~~~~~~~~~~~~~
src/add-ons/kernel/file_systems/xfs/Extent.h:111:41: error: 'offsetof' within non-standard-layout type 'ExtentDataHeaderV5' is conditionally-supported [-Werror=invalid-offsetof]
  111 | #define XFS_EXTENT_V5_VPTR_OFF offsetof(ExtentDataHeaderV5, magic)
      |                                         ^
src/add-ons/kernel/file_systems/xfs/Extent.cpp:294:37: note: in expansion of macro 'XFS_EXTENT_V5_VPTR_OFF'
  294 |         return XFS_EXTENT_CRC_OFF - XFS_EXTENT_V5_VPTR_OFF;
      |                                     ^~~~~~~~~~~~~~~~~~~~~~
src/add-ons/kernel/file_systems/xfs/Extent.cpp: In function 'uint32 SizeOfDataHeader(Inode*)':
src/add-ons/kernel/file_systems/xfs/Extent.h:112:41: error: 'offsetof' within non-standard-layout type 'ExtentDataHeaderV4' is conditionally-supported [-Werror=invalid-offsetof]
  112 | #define XFS_EXTENT_V4_VPTR_OFF offsetof(ExtentDataHeaderV4, magic)
      |                                         ^
src/add-ons/kernel/file_systems/xfs/Extent.cpp:464:53: note: in expansion of macro 'XFS_EXTENT_V4_VPTR_OFF'
  464 |                 return sizeof(ExtentDataHeaderV4) - XFS_EXTENT_V4_VPTR_OFF;
      |                                                     ^~~~~~~~~~~~~~~~~~~~~~
src/add-ons/kernel/file_systems/xfs/Extent.h:111:41: error: 'offsetof' within non-standard-layout type 'ExtentDataHeaderV5' is conditionally-supported [-Werror=invalid-offsetof]
  111 | #define XFS_EXTENT_V5_VPTR_OFF offsetof(ExtentDataHeaderV5, magic)
      |                                         ^
src/add-ons/kernel/file_systems/xfs/Extent.cpp:466:53: note: in expansion of macro 'XFS_EXTENT_V5_VPTR_OFF'
  466 |                 return sizeof(ExtentDataHeaderV5) - XFS_EXTENT_V5_VPTR_OFF;
      |                                                     ^~~~~~~~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors

Why is it not compiling for me?

It is because C++ compilers generate this warning when offsetof is used on non-standard-layout (non-POD) classes.

What you can do is disable Werror for the XFS code in build/jam/ArchitectureRules in Haiku's main repository.

Just comment out this line and XFS will build fine. :)

Got it.
Will work on this now.

If that’s needed, why not submit a patch to do it? It will help other people to test your changes.

Yes, though I was thinking about a way to fix this warning without disabling Werror.


That would be great, but until then the version in the Haiku git repository should compile out of the box. :)


It doesn't seem to show anything in the shell or in Tracker; I tried under both 32 and 64 bit. Did you have to create a partition first? Because I didn't; I thought it would just work like it does under Linux.

I think you mounted a V5 file system.

Yes, that is a bug, really similar to the one I reported above, except that in the case of a V5 file system we can't list all the entries in the shell either.

But even though we can't list all entries inside the "." directory, we can certainly "cd" into directories inside our mount folder (if you remember which directories are inside your XFS image) and then list the entries there.

No, it works the same way as it does in Linux.

Yes indeed, okay, got it.

The syslog for XFS indicates that the lookup() function for the "." directory just continues even after a successful read of all directories using the "ls" command.
Really weird, because nothing like this ever occurred inside xfs_shell.
Let us see if I can dig deeper into this.

One off-topic question: is there some way I can reflect my changes to the XFS code inside Haiku without compiling and running an anyboot image every time?
It takes a long time to compile.

Yes, there are several ways:

  • You can create just a new haiku.hpkg and install that using pkgman install
  • You can compile only the XFS filesystem and put it in /system/non-packaged/add-ons/kernel/file_systems/xfs (in that case you can disable the one that is inside the haiku package to avoid any conflicts)
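The second option can be sketched as follows, assuming a Haiku build tree configured for x86_64 and assuming the add-on's jam target is named xfs (adjust to your tree):

```shell
# From the generated build directory, build only the xfs add-on
# instead of the whole anyboot image:
cd generated.x86_64
jam -q xfs

# Then copy the resulting binary into the running Haiku system
# (e.g. over scp, or into a mounted image) at:
#   /system/non-packaged/add-ons/kernel/file_systems/xfs
```

Add-ons in /system/non-packaged override nothing by themselves, so remember to disable the copy shipped inside the haiku package to avoid conflicts, as noted above.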