Compressed page cache?

So we have a block cache like most OSs… but what about a global compressed page cache and/or block cache? Memory that is not actively in use would not be paged out to disk, but compressed in memory instead, and only paged to disk as a last resort (this would also speed up writes to the page file, since everything written to and read from it would already be compressed).
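To make the idea concrete, here is a minimal user-space sketch of the eviction path I have in mind. All the names, the pool budget, and the "only keep it if it shrinks" threshold are made up for illustration; a real implementation would live in the VM subsystem and would probably use something faster than zlib (LZ4 or similar).

```cpp
// Hypothetical sketch: evicting a page first tries an in-memory compressed
// pool, and only falls back to the swap file when the pool is full or the
// page does not compress well. Illustrative only, not a real kernel design.
#include <zlib.h>
#include <cstdint>
#include <map>
#include <vector>

static const size_t kPageSize = 4096;

struct CompressedPool {
    size_t budget = 64 * 1024 * 1024;               // memory the pool may use
    size_t used = 0;
    std::map<uint64_t, std::vector<Bytef>> pages;   // page id -> compressed bytes

    // Returns true if the page was captured in the pool, false if the caller
    // should write it out to the swap file instead.
    bool Store(uint64_t pageId, const Bytef* data) {
        uLongf compressedSize = compressBound(kPageSize);
        std::vector<Bytef> buffer(compressedSize);
        if (compress(buffer.data(), &compressedSize, data, kPageSize) != Z_OK)
            return false;
        // Not worth keeping if the page barely shrinks or the budget is gone.
        if (compressedSize >= kPageSize * 3 / 4 || used + compressedSize > budget)
            return false;
        buffer.resize(compressedSize);
        used += compressedSize;
        pages[pageId] = std::move(buffer);
        return true;
    }

    // Page-fault path: decompress back into a page frame if the pool has it.
    bool Load(uint64_t pageId, Bytef* out) {
        auto it = pages.find(pageId);
        if (it == pages.end())
            return false;                           // must come from disk instead
        uLongf outSize = kPageSize;
        bool ok = uncompress(out, &outSize, it->second.data(),
                             it->second.size()) == Z_OK && outSize == kPageSize;
        used -= it->second.size();
        pages.erase(it);
        return ok;
    }
};
```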

It seems like it would make sense for desktop usage, as it would reduce the IO load on disks under memory pressure and potentially reduce write wear on SSDs…

Perhaps certain uses of memory could be tagged as compressible as well… for instance, read-only data.
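Something as simple as a hint flag at area/allocation time could carry that information down to the eviction path. This is purely a hypothetical sketch; B_COMPRESSIBLE_HINT is not a real Haiku flag.

```cpp
#include <cstdint>

// Made-up hint flag: mark an area as a good candidate for the compressed pool.
enum AllocationHints : uint32_t {
    B_NO_HINT           = 0,
    B_COMPRESSIBLE_HINT = 1 << 0,   // e.g. read-only or rarely-touched data
};

struct AreaInfo {
    uint32_t hints;
    // ... existing bookkeeping ...
};

// The eviction path could prefer the compressed pool for hinted areas and
// skip the compression attempt entirely for everything else.
bool ShouldTryCompression(const AreaInfo& area) {
    return (area.hints & B_COMPRESSIBLE_HINT) != 0;
}
```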

Packages might not benefit much, since they are already compressed, but perhaps there could be some gains in that area too. Certainly desktop OSs stand to benefit from reducing the IOP load on hard drives during cache thrashing, and this could also prevent some thrashing from happening in the first place.

There is an implementation of a compressed page cache for NetBSD here: https://github.com/vnaybhat/ccache

And of course Linux has several mature compressed-cache subsystems (zram and zswap).


It may not increase performance that much, because:

  1. The compressed cache uses memory, which is then not available for other things
  2. Compressing and decompressing take CPU time, and are actually not that fast (unless you are paging to a plain old spinning hard disk or a slow SD card)

So if you have an SSD, this would actually make things slower.

And wear on SSDs takes 10 years or more before it starts causing any problems, so they will last longer than most spinning hard disks (especially in laptops, which really are not the best place for a spinning disk). There isn’t much to worry about here.


If you are just going to dump on a suggestion with no data to back up your claims, you’re better off cruising right on by… Of course it won’t add much on systems that aren’t hitting swap, but as soon as you do, it will. Android has used it for quite a while to improve performance on lower-end devices. CPU performance typically outstrips disk performance by large margins; even a SATA SSD probably can’t match a compressed cache, and if you aren’t writing to disk you can actually use the disk for other things, since disk IOPS are a finite resource even on NVMe drives. In case you haven’t noticed, having many idle cores sitting around is a real thing now and it is here to stay… 6- and 8-core mobile CPUs are on the verge of becoming commonplace, and AMD should be able to slot a 35–45 W 8-core CPU into their mobile lineup in the next refresh.
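To put very rough, order-of-magnitude numbers on the CPU-versus-disk point (ballpark figures based on published LZ4 and SATA throughput, not measurements of any particular system):

$$
t_{\text{compress}}(4\,\mathrm{KiB}) \approx \frac{4\,\mathrm{KiB}}{\sim 500\ \mathrm{MB/s}} \approx 8\ \mu\mathrm{s}
\qquad
t_{\text{SATA write}}(4\,\mathrm{KiB}) \approx \frac{4\,\mathrm{KiB}}{\sim 550\ \mathrm{MB/s}} + t_{\text{device latency}} \approx 7\ \mu\mathrm{s} + \text{tens of}\ \mu\mathrm{s}
$$

So, roughly speaking, a single core can compress pages about as fast as a SATA SSD can absorb them, without spending any of the drive’s finite IOPS.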

Nope, there is now research showing the exact opposite is true. I don’t necessarily agree with all of that paper’s conclusions, though…

Yes, but they are usually so large that the CPU is basically “never” waiting for them. The data really does not support this conclusion. Software has spent 30 years under the assumption that I/O is slower than CPU, and programmed around it. Now that the tables have turned, we can actually ease up on the caching a bit; but certainly there’s no reason to think we need more convoluted strategies, especially ones that burn CPU and thus power for no real reason.


I’m thinking surely this somehow relates to why the Haiku forum is one of the slowest forums I’ve ever been on, but… I give up lol, and the forum seems to give up sometimes too lol. Note: this is a poor attempt at humor after my last message timed out.

Anyway, this could all be a valid direction to go in as well… 4 GB or more of RAM isn’t uncommon even on phones these days, and laptops typically have 16 GB or more unless you buy a potato.

That said, many people still build PCs around spinning rust… simply because it is so cheap: roughly half to a third of the price of even the cheapest bulk SSDs. So I don’t think SSDs are something desktop OSs should just assume, even if doing so could offer more performance on such setups.

Yeah, some sort of fluke, I was seeing that too.

Sure, but even if compressed page caches are that much of a benefit (I’d need to see numbers; and as you already noted packagefs already does a bunch of work for us here), we are nowhere near implementing something as complex as that. There is a huge amount of low-hanging fruit with vastly higher impacts (e.g. see my commit from today that increases inode performance by 10x to 200x) to be picked long before we would even put that on a roadmap.

And as you said before, NVMe drives would benefit less… even to the point where you don’t want to be using interrupts for IO; it becomes faster to just poll the drive, since it replies so quickly.