Isn't this just a very fast SSD?
What we could do, assuming this kind of instant-access storage becomes the norm:
- When installing packages, we should decompress them. Currently our packages are compressed with zlib, and on a reasonably fast SSD (even today) it takes more time to decompress the data than it would to load it raw from the SSD. Decompressed packages would use more disk space, but would load faster (see the zlib sketch after this list).
- We could investigate removing the block cache from Haiku. The block cache allocates free RAM to store recently used or soon-to-be-used sectors from partitions and disks, so that the next access to them is very fast. Such a system would not be needed if the disk itself is also instant-access. However, the block cache is also used to hold data waiting to be written back, and I'm not sure Optane reaches write speeds as fast as RAM. (A simplified sketch of what the block cache does follows this list.)
- We should however keep the file cache, which stores recently accessed or soon-to-be-accessed parts of files. It is there to avoid filesystem overhead. One of the weaknesses in the current Haiku design is that the file cache and the block cache are independent and compete for free RAM. This is what the "unify filesystem caches" GSoC idea is all about, but no student has applied to work on it for several years (a toy sketch of the shared-budget idea closes out the examples below).
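
To make the first point concrete, here is a minimal sketch of the cost being traded away, using zlib's one-shot compress()/uncompress() API (the same library our packages use). The 1 MiB sample buffer is an arbitrary stand-in for package data, not a real Haiku package, and the code measures nothing; it just shows the decompression step that installing packages uncompressed would pay once instead of on every load.

```cpp
#include <zlib.h>
#include <cstdio>
#include <vector>

int main()
{
	// Stand-in for a package's contents (1 MiB of repetitive sample data).
	std::vector<Bytef> raw(1 << 20, 'x');

	// Compress it, as our package files effectively are on disk today.
	uLongf compressedSize = compressBound(raw.size());
	std::vector<Bytef> compressed(compressedSize);
	compress(compressed.data(), &compressedSize, raw.data(), raw.size());

	// The cost in question: inflating the data back before it can be used.
	std::vector<Bytef> restored(raw.size());
	uLongf restoredSize = restored.size();
	int result = uncompress(restored.data(), &restoredSize,
		compressed.data(), compressedSize);

	printf("uncompress: %s, %lu -> %lu bytes\n",
		result == Z_OK ? "ok" : "failed", compressedSize, restoredSize);
	return 0;
}
```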
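For the second point, here is a deliberately simplified sketch of what a block cache buys on the read path: the first access to a block pays disk latency, repeat accesses are served from RAM. This is a conceptual toy, not Haiku's actual kernel implementation; with instant-access storage the map would add little on reads, while the write-back side mentioned above is the part that would still need thought.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Toy block cache: blocks read once from disk are kept in RAM so the
// next access to the same block number is nearly free.
class BlockCache {
public:
	BlockCache(int fd, size_t blockSize)
		: fFD(fd), fBlockSize(blockSize) {}

	const std::vector<uint8_t>& Get(off_t blockNumber)
	{
		auto it = fBlocks.find(blockNumber);
		if (it != fBlocks.end())
			return it->second;		// hit: RAM speed

		// Miss: pay the disk latency once, then remember the block.
		std::vector<uint8_t> block(fBlockSize);
		pread(fFD, block.data(), fBlockSize,
			(off_t)blockNumber * fBlockSize);
		return fBlocks[blockNumber] = std::move(block);
	}

private:
	int fFD;
	size_t fBlockSize;
	std::unordered_map<off_t, std::vector<uint8_t>> fBlocks;
};

int main()
{
	// Any readable file will do; this path just happens to exist on Haiku.
	int fd = open("/boot/system/lib/libroot.so", O_RDONLY);
	BlockCache cache(fd, 4096);
	cache.Get(0);	// first access: disk read
	cache.Get(0);	// second access: served from RAM
	close(fd);
	return 0;
}
```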
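And for the last point, a toy illustration of what unifying the caches could mean: instead of each cache grabbing free RAM on its own, both reserve pages from one shared budget under a single policy. The CachePool class and the sizes here are entirely hypothetical, not an existing Haiku API.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical shared budget: both the file cache and the block cache
// would allocate through this, so neither can starve the other.
class CachePool {
public:
	explicit CachePool(size_t limit) : fLimit(limit), fUsed(0) {}

	bool Reserve(size_t bytes)
	{
		if (fUsed + bytes > fLimit)
			return false;	// caller must evict something first
		fUsed += bytes;
		return true;
	}

	void Release(size_t bytes) { fUsed -= bytes; }

private:
	size_t fLimit;
	size_t fUsed;
};

int main()
{
	CachePool pool(64 * 1024 * 1024);	// one shared 64 MiB budget

	bool fileCache = pool.Reserve(48 * 1024 * 1024);
	bool blockCache = pool.Reserve(32 * 1024 * 1024);	// would overshoot
	printf("file cache reserve: %d, block cache reserve: %d\n",
		fileCache, blockCache);
	return 0;
}
```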
Finally, it's quite possible that RAM manufacturers will respond to this by making even faster RAM, so that storage always lags behind RAM speed. So maybe we shouldn't do anything, and instead wait a little to see how things play out.