New memory technology and Haiku package system

Just found out about “Intel Optane”:

This is incredible, and actually I expected that this would happen eventually…
One type of memory, no more differentiation between RAM and ROM!
And I have a question:
Will that be a problem for the Haiku package system? Will there be a need for some kind of reverting to plain filesystem entries for data?
Would it be a better choice to store system data the way GoboLinux does (each version of an app or library has its own directory)?

I don’t see it as revolutionary as you do, but I said the same about 3D accelerator cards many years ago, so I suppose it is better if I shut up.

Why would Haiku packages be a problem? It is just data.

On Haiku every package has its own directory. Just check the package-links folder.

Haiku packages are archived in some way, yes? That is fine when the archive is loaded into RAM from ROM, but it is extra work on the new type of memory (RAM+ROM), because data cannot be accessed directly from the package; it must first be rewritten into some cache as expanded data.
If packages could be written in unarchived form into this new type of memory, the system could access them as plain files, more quickly and directly. Could a Haiku package be written as some special (or not) kind of unarchived directory with unarchived files for the system to access?
That is how I see the situation, but I do not fully understand how packages work, so maybe I am wrong in some way.

Heh, and I am not talking here specifically about the Intel Optane product; I am talking about the new type of memory (RAM+ROM) and how Haiku packages will work with it. Intel Optane is just the first sign of a new trend in computer memory technology.

It isn’t available yet, and Haiku isn’t targeting the latest and newest technology.
But patches welcome.

Isn’t this just a very fast SSD?

What we could do, assuming this kind of memory becomes common:

  • When installing packages, we should decompress them. Currently our packages are compressed with zlib, and if you have a reasonably fast SSD (even today), it takes more time to decompress them than it would to load the raw data from the SSD. So decompressed packages would use more space, but would load faster (see the sketch after this list).
  • We could investigate removing the block cache from Haiku. The block cache allocates free RAM on the system to store recently used or soon-to-be-used sectors from partitions and disks, so that the next access to them is very fast. Such a cache would not be needed if the disk itself is also instant-access. However, the block cache is also used to store data waiting to be written, and I’m not sure Optane reaches write speeds as fast as RAM.
  • We should however keep the file cache, which stores recently accessed/soon-to-be-accessed parts of files. This one is there to avoid filesystem overhead. And one of the weaknesses in the current Haiku design is that the file and block caches are independent and compete for use of free RAM. This is what the “unify filesystem caches” GSoC idea is all about, but no student has applied to work on it for several years :frowning: .
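
As a rough illustration of the first point, here is a small standalone sketch (not Haiku code; the file names are made up, and you would build it with -lz). It simply times reading the same data once through zlib decompression and once as a plain uncompressed file:

    // Compare decompressing a gzip file vs. reading raw data.
    // File names are hypothetical; build with: g++ bench.cpp -lz
    #include <zlib.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    static double timeCompressed(const char* path)
    {
        auto start = std::chrono::steady_clock::now();
        gzFile gz = gzopen(path, "rb");
        if (gz == nullptr)
            return -1.0;
        std::vector<char> buf(1 << 20);                // 1 MiB chunks
        while (gzread(gz, buf.data(), (unsigned)buf.size()) > 0)
            ;                                          // decompress and discard
        gzclose(gz);
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
    }

    static double timeRaw(const char* path)
    {
        auto start = std::chrono::steady_clock::now();
        FILE* f = fopen(path, "rb");
        if (f == nullptr)
            return -1.0;
        std::vector<char> buf(1 << 20);
        while (fread(buf.data(), 1, buf.size(), f) > 0)
            ;                                          // read raw and discard
        fclose(f);
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
    }

    int main()
    {
        printf("compressed:   %.3f s\n", timeCompressed("package_data.gz"));
        printf("uncompressed: %.3f s\n", timeRaw("package_data.raw"));
        return 0;
    }

On a fast enough disk the second number wins even though more bytes are read, which is the whole argument for storing packages decompressed.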

Finally, it’s quite possible that RAM manufacturers will reply to this by making even faster RAM, and as a result storage will always lag behind RAM speed. So maybe we shouldn’t do anything yet and just wait a little to see how things turn out.


I think it is good to be prepared for both possibilities. It is also possible that in the near future different kinds of systems will be available: traditional ones with RAM and ROM, and some sort of systems with RAM+ROM where very fast RAM is used only as a CPU cache; of course some access to peripheral ROM would also be available.

I think the major difference is that RAM and storage media may merge (even before memristors become an industrial reality), and then the OS would have to manage by itself which part is used as RAM and which is used as storage (with partitions and file systems). The latest PCIe NVMe SSDs show speeds that are not that far from RAM…

And it could lead to a big paradigm shift, as the current memory state could also be considered persistent storage, and thus lead to a very different design of what the process of “saving” a document means.
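
To make that concrete: if main memory were persistent, editing a document in place would already be “saving” it. Here is a minimal sketch of the idea using a plain POSIX memory-mapped file (the file name and size are made up; this only approximates persistent memory with today’s tools):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        const size_t kSize = 4096;
        int fd = open("document.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, kSize) < 0)
            return 1;

        void* addr = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED)
            return 1;
        char* doc = static_cast<char*>(addr);

        // "Editing" is just writing to memory; there is no separate
        // save step. msync() only makes the write-back explicit.
        strcpy(doc, "hello, persistent world");
        msync(doc, kSize, MS_SYNC);

        munmap(doc, kSize);
        close(fd);
        return 0;
    }

The point of the sketch is only the model: the in-memory state is the document, and “saving” stops being a separate copy step.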

You still have the problem of having to erase at least an entire sector just to change a single bit when using flash memory, and this still affects devices like this AFAIK (they are still NAND flash at the end of the day), which implies reading at least an entire sector and writing it back. Even if this process is hidden, the cost is still there. So this is a serious problem if you wanted to use these devices like RAM. Perhaps this is why the Intel software purportedly learns your most frequently used applications - the content of the flash drive doesn’t actually change very often, as that is expensive (in terms of time), but the read speed for those frequently used things is very, very good, so it still makes sense from that point of view.
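
As a back-of-the-envelope illustration of that cost (the 128 KiB erase block size is just an assumed example, not a figure for any particular device), changing a single byte still moves the whole block:

    #include <cstdio>

    int main()
    {
        const long kEraseBlock = 128 * 1024;   // assumed erase block size
        const long kChanged    = 1;            // bytes actually modified

        // Read-modify-write: the whole block is read, erased and rewritten.
        long bytesMoved = 2 * kEraseBlock;
        printf("changed %ld byte(s), moved %ld bytes (~%ldx amplification)\n",
               kChanged, bytesMoved, bytesMoved / kChanged);
        return 0;
    }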

Well, actually, SDRAM and DDR SDRAM are already pretty bad at random access. They are usable in modern computer architectures only because there are various CPU caches and the random access is smoothed out by these (3 levels of cache, with sizes up to several megabytes).

So essentially this makes the RAM a 4th level of cache and implements permanent storage in flash at the 5th level. Which is what everyone is already doing in software, anyway.
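
A quick way to see how much the caches are hiding is to walk the same array sequentially and then in a random order (sizes and numbers here are arbitrary; it is a rough experiment, not a proper benchmark):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    // Sum the array in the given visiting order and return elapsed seconds.
    static double walk(const std::vector<size_t>& order,
                       const std::vector<int>& data)
    {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i : order)
            sum += data[i];
        double secs = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        printf("(checksum %lld) ", sum);       // keep the loop from being optimized away
        return secs;
    }

    int main()
    {
        const size_t kCount = 1 << 24;         // 16M ints, ~64 MiB
        std::vector<int> data(kCount, 1);

        std::vector<size_t> order(kCount);
        std::iota(order.begin(), order.end(), 0);
        printf("sequential: %.3f s\n", walk(order, data));

        std::shuffle(order.begin(), order.end(), std::mt19937_64(42));
        printf("random:     %.3f s\n", walk(order, data));
        return 0;
    }

The random walk is typically several times slower on the same DRAM, which is the “pretty bad at random access” part; the caches are what usually keep you from noticing.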

Well sure, but only achieving maximum speed with contiguous reads, and the various latencies etc. of SDRAM, is a different issue from erase blocks in flash memory. With SDRAM you can still overwrite single memory locations without also having to rewrite much larger areas, even if you waste some time doing so, and the prefetch size is also still far smaller than typical flash erase block sizes.