RAID support in Haiku

Haiku does not have the required drivers for RAID support (the VMD controller on Intel processors). So I am looking into adding basic support for that. I would have to port the driver from FreeBSD.

  1. I don’t have experience with the Haiku kernel. Is porting the driver enough, or is there missing functionality I need to add before porting it?
  2. Where can I learn more about the Haiku kernel API? (I am reading the docs in the source code for now.)

This is the internals documentation: Kernel — Haiku internals documentation

That documentation is still very incomplete and may not be very helpful on its own in the current state.

You probably will need to study the existing drivers for hard disks and see how they operate.

Usually there is a low-level “bus” driver that allows sending commands to a disk; for IDE and SATA, for example, it is here: ata « busses « kernel « add-ons « src - haiku - Haiku's main repository (various implementations for different hardware interfaces).

Then on top of that there is a bus manager that knows which commands to send: ata « bus_managers « kernel « add-ons « src - haiku - Haiku's main repository

The core of it is executing SCSI commands: ATADevice.cpp « ata « bus_managers « kernel « add-ons « src - haiku - Haiku's main repository

Finally that allows the SCSI stack to manage the device:

https://cgit.haiku-os.org/haiku/tree/src/add-ons/kernel/generic/scsi_periph/device.cpp

So you have to decide where to hook in that whole thing.

Does your RAID device accept SCSI commands? Does it need a separate bus manager, or is there only one single standardized interface? For example, the NVMe driver bypasses all of this and directly implements the disk interface. In that case, you can have a look at the virtio_block driver for a very simple example: virtio_block.cpp « virtio_block « virtual « disk « drivers « kernel « add-ons « src - haiku - Haiku's main repository

Let’s look at this one closely. You can start reading at the end of the file, where there are two structures with pointers to “hook functions” that implement the driver. There is one structure for the driver, which is tasked with detecting whether there is any supported device on the system and attaching to it. Then there are the device hooks, which are related to operations on the device itself (reading and writing blocks, as well as handling the ioctl function).

The IO goes through the IO scheduler, which means eventually all of it ends up in the do_io function. You can probably find a way to rewire the FreeBSD code in there.


The vmd driver should actually act as a PCI bus manager. So the disk driver stuff doesn’t apply.

I did a little digging and here’s what I understood:

  • the VMD controller is a PCIe device
  • while in VMD mode, running lspci -Dtv on Linux shows the following output:
-[0000:00]-+-00.0  Intel Corporation 12th Gen Core Processor Host Bridge/DRAM Registers
           +-01.0-[01]----00.0  NVIDIA Corporation GA107M [GeForce RTX 3050 Ti Mobile]
          (some stuff)
           +-0e.0  Intel Corporation Volume Management Device NVMe RAID Controller
          (more stuff) 
-[10000:e0]-+-06.0-[e1]----00.0  SK hynix Platinum P41/PC801 NVMe Solid State Drive
            \-06.2-[e2]----00.0  Micron Technology Inc 2550 NVMe SSD (DRAM-less)

Notice that the domain of the disks is different (10000 here), whereas in NVMe mode (no VMD) the domain for all devices is 0000. I am not sure what this means, but Stack Overflow suggests the VMD controller could be acting like a host bridge or a root complex (more likely a root complex). I will check how the FreeBSD implementation handles this soon.

Correct me if I am wrong, but PCIe device enumeration is done by the BIOS, so maybe Haiku is unable to detect the disks in VMD mode because their domain is different?

The vmd driver should actually act as a PCI bus manager. So the disk driver stuff doesn’t apply.

Is a bus manager the same as a root complex?

You can probably see the device in Haiku with listdev or in the Devices preferences. So detecting it at the PCI level is not a problem. But then there is no driver handling it, so nothing else happens.

The drivers can run arbitrary code to match with devices; typically they will either check for specific vendor and device IDs, or check for a specific device type (such as “AHCI SATA controller” or “RAID disk controller”).