Actually, this is almost the same as the process I applied, and unfortunately it gave me the same results.
Anyway, I installed GDB on my local machine and I am able to continue the process under the debugger, but the problem now is that whenever a segmentation fault occurs, xfs_shell terminates the entire process.
So when I try to reproduce the segmentation fault for GDB to catch, we can't get any trace, since xfs_shell terminates and GDB just reports:
Inferior 1 (process ___) exited normally.
If we could get xfs_shell not to terminate, we could then get a backtrace.
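For what it's worth, gdb has settings that control how signals and forked children are handled; if xfs_shell forks and the fault happens in the child, "exited normally" is exactly what the parent would look like from gdb's point of view. These are standard gdb commands (the xfs_shell invocation itself is just an assumed example):

```
$ gdb --args ./xfs_shell /path/to/image
(gdb) set follow-fork-mode child         # follow into a forked child, if any
(gdb) handle SIGSEGV stop print nopass   # stop in gdb before any handler runs
(gdb) run
(gdb) bt                                 # backtrace once the fault is caught
```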
I think I should ask this on the mailing list to get help from other developers as well.
I’m not familiar with how lldb works. It’s strange that gdb would not see the segmentation fault.
One option is adding code in xfs_shell to “ignore” the SIGSEGV signal (the one a segmentation fault raises) and not exit when receiving it.
Another option is to start the process from inside lldb (using its run command), but I am not sure how easy that is, since you need to replicate what “jam run” does. Maybe other developers can share their tips indeed.
In my case I run these things inside Haiku, where I can attach the Haiku Debugger after the crash occurs, which is very convenient. I don’t know why this isn’t the default behavior in other systems.
Now, to fix it, I checked the values of the parameters we pass to the read_pos function and found one strange result: path[i].blockData = 0. The remaining parameter values look fine compared with other successfully read directories.
Another thought: why doesn’t it show us a proper error? Right now it simply gives a segmentation fault.
We are passing DATA as a parameter in this call, which gives us our path as fPathForData.
There isn’t any fixed number of directories we can read successfully; sometimes it is 20K and sometimes just 1K.
But the cause of the segmentation fault is the same in every test I did.
Does anyone have ideas on how I can fix this, or maybe some more TRACE output I should look into to get a clearer picture of the problem?
If you tell read_pos to read data from disk and store it at address 0, it will happily do so. But address 0 is not writable, so this will crash. Since read_pos cannot reliably know if an address is valid or not, it can’t catch this and return an error. Instead, it just crashes (before you can get to the error checks).
What seems strange to me in this code is that this buffer is allocated a few lines above:
path[i].blockData = new(std::nothrow) char[len];
However this is guarded by a condition:
if (path[i].type != 2) {
Overall this code seems to have a lot of checks and allocations/reallocations of this buffer.
So, first question is, what is the value of path[i].type when you get a crash? Then you can follow through the code in this function knowing which code path is supposed to happen. And check carefully how the blockData buffer size is computed and how the buffer is allocated.
Also, what does this type variable mean? Are these things just called “type 1” and “type 2” in the XFS spec, or do they have better names? If they have better names, why not use an enum matching the names?
So during my testing of XFS (now directly inside Haiku!) I hit one really strange bug.
When I mount the file system at a mount point (in my case a directory named Testing), I can access all its entries from the Terminal, but when I open the Testing folder through the GUI it shows no items.
It’s a missing feature in the XFS driver code. Tracker will get several things from the volume, not just the name, before it will display it. You can check logs from XFS (they will be in /var/log/syslog).
For example you can do this:
tail -f /var/log/syslog
And in another terminal:
open ~/Desktop/Testing
In particular, you may need to implement the function that fills in fs_info to return some values. Also check the behavior of the “.” and “..” directory entries (I don’t remember if it’s the responsibility of the filesystem or of the OS to manage them). Check for any function that you may not have implemented yet and that Tracker is trying to use, and see if you can implement them.
It doesn’t seem to show anything in either the shell or Tracker; I tried under 32 and 64 bit. Did you have to create a partition first? Because I didn’t; I thought it would just work like it does under Linux.
Yes, that is a bug, really similar to the one I reported above; in the case of a V5 file system we can’t list all entries in the shell either.
But again, even though we can’t list all entries inside the “.” directory, we can certainly use the “cd” command inside our mount folder (if you remember which directories are inside your XFS image) and then list the entries inside that directory.
The syslog output for XFS indicates that the lookup() function for the “.” directory just keeps going even after all directories have been read successfully with the “ls” command.
Really weird, because nothing like this ever occurred inside xfs_shell.
Let’s see if I can dig further into this.
One off-topic question: is there some way I can reflect my XFS code changes inside Haiku without compiling and running an anyboot image every time?
It takes a long time to compile.
You can build just a new haiku.hpkg and install that using pkgman install.
You can also compile only the XFS filesystem and put it in /system/non-packaged/add-ons/kernel/file_systems/xfs (in that case, disable the one inside the haiku package to avoid any conflicts).
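Roughly, the second workflow could look like this. The jam target name and the object path below are assumptions based on a typical Haiku build tree; check what your tree actually produces and adjust accordingly:

```shell
# From your Haiku build directory (paths/target names may differ):
cd /path/to/haiku/generated.x86_64
jam -q xfs    # build only the xfs file system add-on

# Then, on the Haiku install, place the result in the non-packaged tree:
mkdir -p /system/non-packaged/add-ons/kernel/file_systems
cp objects/haiku/x86_64/release/add-ons/kernel/file_systems/xfs/xfs \
   /system/non-packaged/add-ons/kernel/file_systems/
```

Add-ons under /system/non-packaged take effect without rebuilding the whole image, which avoids the long anyboot compile for every iteration.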