I started playing around with the ramdisk. Well… because I’m an old Amiga user, and we love our RamDisk.
I noticed that the RAM doesn’t seem to be freed after deleting files from the Ram Disk. But I’m not 100% certain I’m reading things right, thus the request for a second opinion before opening a ticket. I did a search to see if this had been reported, but I didn’t find anything.
Clean Boot. No programs opened.
Add a bunch of large files to the RamDisk.
Delete the files from the RamDisk.
Even after deleting the files, the RAM usage remains high with no software running. I forgot to take a screenshot. But I also emptied the Trash and checked the memory again; it did go down some. But again, I’m not super familiar with the ProcessController app and I may not be reading it right.
I did find that somewhere on the forum after I made this post. I used fstrim /RAMDisk and it did clear some memory, but I was still left with over a gig of RAM being used on a system with no applications running. Well, except for LaunchBox.
“real” vs “virtual” memory (i.e. physical RAM vs. stuff stored on disk until it’s needed again)
whatever equivalent BeOS/Haiku might have of memory which is/is not paged out
the memory being “held” by something that handles the ‘backing’ of the RAMDisk.
If fstrim had any effect at all, that would suggest that the RAMDisk may retain allocated memory even if it isn’t currently in use. E.g. if you open a file that’s 1 GB in size, closing it doesn’t actually shrink the RAMDisk, leading to growth in the size of the RAMDisk until something comes along to clean things up.
You could try the steps you did again, but this time note the change after each step, removing a single file at a time rather than all of them.
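To make the suspected behavior concrete, here is a toy model (plain Python, not Haiku code; all the names are made up) of a block-device-backed RAM disk that allocates backing pages on first write and only frees them when the filesystem sends a trim for those blocks:

```python
# Toy model: a block device backing a RAM disk. Deleting a file
# only changes filesystem metadata; the device keeps its pages
# until a trim/discard tells it which blocks are actually free.

class ToyRamDisk:
    def __init__(self):
        self.pages = {}              # block number -> data ("allocated RAM")

    def write(self, block, data):
        self.pages[block] = data     # backing page allocated on first write

    def trim(self, blocks):
        for b in blocks:
            self.pages.pop(b, None)  # only trim releases the memory

    def allocated(self):
        return len(self.pages)

disk = ToyRamDisk()
for b in range(4):                   # "copy a large file onto the disk"
    disk.write(b, b"x" * 4096)

# "Delete" the file: filesystem metadata changes, but the device
# was never told, so allocated() still reports 4 blocks in use.
deleted_blocks = range(4)
print(disk.allocated())              # 4

disk.trim(deleted_blocks)            # what fstrim triggers
print(disk.allocated())              # 0
```

This would match the observation that memory only drops after running fstrim, not after the delete itself.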
ramdisk always reserves however much memory you tell it to use. So, if you have a total of 5GB of RAM and swap, and you create a 1GB ramdisk, you now have 4GB left for everything else to use, whether or not the ramdisk is actually using that memory. (I don’t know if we actually have a clear way to see how much memory is available vs. reserved at any given time; we generally only display “in use” memory, indeed…)
That’s not right, physical memory is allocated on-demand as things are read or written to the disk, and can be released using fstrim.
Even with an empty disk, however, some space will remain allocated because the BFS data structures still need to be stored somewhere. So you can’t get the ramdisk down to 0 bytes once it is formatted.
It would be more efficient to use a ramfs (a filesystem directly designed to run in RAM, instead of using a block device that works like a disk, and then adding a filesystem on top of it). But currently we don’t have a working one available in Haiku.
I’m not sure what you mean by two columns here. On the DeskBar there is only one (the other columns are for CPU usage). In the menu with detailed meters, we show in dark blue memory that is really used, and in light blue, memory that is used for caching, and can be released if needed to make space for other things. Having a lot of light blue is generally a good thing: your memory isn’t sitting there completely unused, yet it can be made available for other things if you suddenly start several memory-hungry applications.
I said “reserved”, not allocated. (“Reserved” means that the pages have not been actually handed out, but there is a guarantee there will be some when they are needed.) Or, does the ramdisk not actually reserve pages? That would be strange, it would mean you could get data loss if you ran out of memory…
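For illustration, a toy model (made-up Python, not the actual Haiku VM code) of the reserved-vs-allocated distinction: reserving decrements a free counter up front as a guarantee, but no page is actually handed out until it is allocated against that reservation:

```python
# Toy model of "reserved" vs "allocated" pages. A reservation can
# fail cleanly up front; allocating out of an existing reservation
# can then never fail.

class ToyVM:
    def __init__(self, total_pages):
        self.free = total_pages
        self.reserved = 0
        self.allocated = 0

    def reserve(self, n):
        if self.free < n:
            return False             # not enough memory: fail now
        self.free -= n
        self.reserved += n           # guaranteed, but not handed out yet
        return True

    def allocate_reserved(self, n):
        assert self.reserved >= n    # covered by the earlier guarantee
        self.reserved -= n
        self.allocated += n

vm = ToyVM(total_pages=1024)
vm.reserve(256)                          # e.g. creating a small ramdisk
print(vm.free, vm.reserved, vm.allocated)   # 768 256 0
vm.allocate_reserved(16)                 # first writes touch some pages
print(vm.free, vm.reserved, vm.allocated)   # 768 240 16
```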
I spent some time on the in-tree one before and got it partially working. It has a few known problems at present: timestamps are broken, and live notifications do not seem to work. I have already refactored it to run in the kernel and use physical pages instead of virtual memory, and fixed some crashes.
It allocates pages on demand. Initially it does not allocate anything, and pages are allocated when you first write to them. If you try to write and it runs out of memory, the write just fails. There is no need to reserve things because it does not map pages in the memory space and use faults to allocate them (which is how malloc works). Instead it uses the vm APIs to directly access physical memory (there is no need to waste gigabytes of address space for a ramdisk).
As a result, error handling can be done in a safe way, it will not crash at random places like overcommitted malloc memory would. It will just have some function return an error.
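A minimal sketch (hypothetical Python, not the real driver) of allocate-on-demand with no reservation: when no pages are left, the write simply returns an error to its caller instead of faulting later:

```python
# Toy model: allocate-on-demand with no reservation. A write to a
# not-yet-backed block either grabs a page or fails immediately
# with an error code the caller can handle.

import errno

class ToyOnDemandDisk:
    def __init__(self, vm_free_pages):
        self.vm_free = vm_free_pages
        self.pages = {}

    def write(self, block, data):
        if block not in self.pages:
            if self.vm_free == 0:
                return -errno.ENOSPC   # out of memory: fail the write
            self.vm_free -= 1          # page allocated on first write
        self.pages[block] = data
        return len(data)

disk = ToyOnDemandDisk(vm_free_pages=2)
print(disk.write(0, b"a"))       # 1 (bytes written)
print(disk.write(1, b"b"))       # 1
print(disk.write(2, b"c"))       # a negative errno (ENOSPC)
```

The error surfaces at a well-defined point (the write call) rather than as a crash somewhere random, which is the safety property described above.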
The only case where this could create problems is if you had a block or file cache in use (the applications may think their write is complete, and later on it turns out not to be possible), but why would you use a block or file cache on a ramdisk? So all operations should be synchronous (I don’t know if BFS ensures this), and if a write fails, the write() system call can just tell that to the application which tried to write, which in turn can report the problem to the user, and the user can try to save their data elsewhere, or free up some space and try again.
Anyway, you can run into similar scenarios with normal mass storage (say, if the user unplugs a USB drive while something is trying to write to it). It is just something the upper layers should be prepared for, and handle correctly to avoid data losses (which is not easy, but the ramdisk does not really introduce any new problems here).
It can still do all of these things and reserve physical pages, so that it has a guarantee that, when the time comes, it will be able to get pages for writes.
Indeed it seems the ram_disk at present reserves and then immediately allocates the pages. It could do this separately, i.e. reserve the pages upon creation, and then allocate them on write. That might be better, as it would guarantee that a ramdisk of whatever size would actually be able to reach that size.
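A rough sketch (made-up Python, not the actual ram_disk code) of that split: reserve the full size when the disk is created, then allocate each page lazily on first write, so writes are guaranteed to succeed up to the advertised size:

```python
# Toy model: reserve at creation, allocate on write. Creation fails
# cleanly if the guarantee cannot be given; after that, writes to
# any block within the disk's size can never run out of pages.

class VMCounter:
    def __init__(self, total_pages):
        self.free, self.reserved = total_pages, 0

    def reserve(self, n):
        if self.free < n:
            return False
        self.free -= n
        self.reserved += n
        return True

    def allocate_reserved(self, n):
        self.reserved -= n           # pages come out of the reservation

class ToyReservingDisk:
    def __init__(self, vm, size_pages):
        self.vm = vm
        self.pages = {}
        # Reserve the whole disk up front.
        if not vm.reserve(size_pages):
            raise MemoryError("not enough memory to back the disk")

    def write(self, block, data):
        if block not in self.pages:
            self.vm.allocate_reserved(1)   # guaranteed to succeed
        self.pages[block] = data

vm = VMCounter(total_pages=1024)
disk = ToyReservingDisk(vm, size_pages=256)
disk.write(0, b"data")               # never fails: page was reserved
print(vm.free, vm.reserved)          # 768 255
```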