Can a process query its (resident) memory size?

Is it possible for a Haiku process to query its resident memory size? On Linux you could do this by parsing /proc/self/statm. Practically every UNIX has its own API for that purpose.
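
For comparison, this is roughly what the /proc-based approach looks like on Linux (an untested sketch; the first two fields of statm are the total program size and the resident set size, both in pages):

#include <stdio.h>
#include <unistd.h>

/* Returns the resident set size in bytes, or -1 on error. */
static long resident_bytes(void)
{
	FILE *f = fopen("/proc/self/statm", "r");
	long size, resident;

	if (!f)
		return -1;
	if (fscanf(f, "%ld %ld", &size, &resident) != 2)
		resident = -1;
	fclose(f);
	return resident < 0 ? -1 : resident * sysconf(_SC_PAGESIZE);
}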

Alternatively, I could work with the amount of memory allocated via malloc(), if there were any efficient introspection functions for it. Unfortunately, malloc.h at least doesn’t seem to declare any.

How is malloc() implemented on Haiku anyway? sbrk(), mmap() or some BeOS-derived mechanism?

Currently malloc is sbrk-based, but we plan to change that.

You can use get_next_area_info to iterate over areas, that should allow you to compute the total memory usage.
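
A minimal sketch of that approach (assuming the area_info fields from OS.h; ram_size should be the portion of the area that is currently in physical memory):

#include <OS.h>
#include <stdio.h>

int main(void)
{
	ssize_t cookie = 0;
	area_info info;
	size_t total = 0, resident = 0;

	/* Walk all areas of the current team and sum up their sizes. */
	while (get_next_area_info(B_CURRENT_TEAM, &cookie, &info) == B_OK) {
		total += info.size;
		resident += info.ram_size;
	}

	printf("Total: %zu bytes, resident: %zu bytes\n", total, resident);
	return 0;
}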

Thanks! But one more question: Is get_next_area_info() a constant-time operation, and is the number of areas in a typical UNIX program that allocates exclusively via the libc’s malloc() expected to be constant? And if so, will it stay that way once malloc() is no longer sbrk()-based?

I have to poll the program’s memory size often, so anything whose time complexity is proportional to the amount of memory allocated (cf. Linux’ mallinfo()) won’t do.

Well, you’d need to iterate over all areas, so it’s linear in the number of areas.
For now this is constant (malloc uses a single area, and when it’s full your program runs out of memory). The new allocator may either attempt to resize its single area, or allocate more areas if the address space is too fragmented, I guess (but until it’s actually running, I can’t say for sure).

I’m not aware of a more efficient way currently.

What is your purpose for polling the apps’ memory for which a constant-time solution is needed?

Note that all memory in Haiku is in “areas”; mmapped regions will therefore show up in area_info just like the standard malloc areas.

This is for my text editor SciTECO. It has the peculiarity that a single keypress by the user can allocate an arbitrarily large amount of memory, which could eventually crash the program. This can even happen by accident (e.g. a typo). (It’s sort of an interactive programming language with undo, so any infinite loop will do that.) SciTECO therefore tries to limit the total amount of memory allocated (“memory limiting”). Of course there are POSIX resource limits, but besides being non-portable - I support non-POSIX platforms - they only work if you have total control over all allocations (i.e. over what happens when malloc() returns NULL). Unfortunately, I use 2 libraries that do not cooperate with resource limits (libglib and Scintilla).

Over the past 8 years, I have tried different strategies, including overloading C++'s new/delete operators, overriding malloc() and using malloc_usable_size(), even replacing the malloc() implementation altogether… but so far using OS-specific APIs seems to be the least insane way to solve this.

Perhaps I’ll simply introduce a generic fallback for UNIX based on sbrk(0) - &end. This should work on Haiku for the time being. However, without malloc_trim(), hitting the memory limit is a showstopper even if you can detect it, since there might be no way to recover from it.
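
That is, something along these lines (a sketch only; end is the traditional linker-provided symbol marking the end of static data and is not available on every platform):

#include <unistd.h>

extern char end;	/* linker-provided: first address past the static data */

/* Rough heap usage: distance between the program break and the end of BSS. */
static size_t heap_usage(void)
{
	return (char *)sbrk(0) - &end;
}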

I’m still unsure how to solve this once and for all.

No, it will not, as the system malloc uses its own internal sbrk and not the POSIX function itself.

How is this occurring? Isn’t it either (1) memory inside your “scripting language” (which you can track) or (2) memory allocated for Scintilla buffers, which will correspond closely to how many characters are in the edit view – and both of these are easily trackable without tracking overall malloc, right?

I can of course try to do as much as possible using custom allocators - wrapping malloc() - in order to track as much memory as possible. But that approach has its own disadvantages. It will be very imprecise, especially with Scintilla: approximating its memory usage is not trivial, since lines can have attributes, and there are styles and undo tokens as well. I can’t be sure that my approximation won’t be significantly off from the real values. Unless I try again to override new/delete - but that won’t help without malloc_usable_size(), which is itself unreliable. Or, even worse, by storing the size of each memory chunk at the beginning of every heap object (see the sketch below). “Sized” allocators are only available since C++14 and turned out to be unsuitable for memory tracking as well.
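
For illustration, the size-prefix variant would look roughly like this (hypothetical tracked_malloc()/tracked_free() helpers, not code I actually ship). Note that it only accounts for the requested sizes, not for the allocator’s internal overhead and fragmentation:

#include <stddef.h>
#include <stdlib.h>

static size_t memory_usage = 0;

/* Every chunk is prefixed with its requested size;
 * the union keeps the payload suitably aligned. */
union header {
	size_t size;
	max_align_t align;
};

void *tracked_malloc(size_t size)
{
	union header *h = malloc(sizeof(*h) + size);
	if (!h)
		return NULL;
	h->size = size;
	memory_usage += size;
	return h + 1;
}

void tracked_free(void *ptr)
{
	if (!ptr)
		return;
	union header *h = (union header *)ptr - 1;
	memory_usage -= h->size;
	free(h);
}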

So I’ll at least try to stick with my current approach. For the aforementioned reasons and practical problems, I also believe that an OS should provide that level of introspection to its processes. Most seem to. Please note that POSIX even specifies getrusage(), and some UNIXes define additional fields like ru_maxrss, so that might be a way to go on Haiku as well some day.
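
For reference, querying the peak resident set size would be as simple as this on systems that fill in ru_maxrss (note that it reports the peak rather than the current size, and the unit is system-dependent, e.g. kilobytes on Linux):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rusage usage;

	if (getrusage(RUSAGE_SELF, &usage) == 0)
		printf("Peak RSS: %ld (unit is system-dependent)\n",
		       (long)usage.ru_maxrss);
	return 0;
}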

Haiku has an mstats() function that can be used to determine the amount of memory allocated by malloc(): https://git.haiku-os.org/haiku/tree/src/system/libroot/posix/malloc_hoard2/wrapper.cpp#n570

That doesn’t seem to be in any header, does it? So I’d have to copy the struct and declaration into my sources, and there is no guarantee that this function won’t be removed or changed without prior notice. It might still be better to iterate over all areas via get_next_area_info().

I am not sure, but the comment above suggests that this function came from BeOS, so it will probably not be removed.

It does not give any information about memory that was released via free(), though.

Confirmed. So it’s more or less useless in my case without malloc_trim().
Unfortunately, mstats() doesn’t do anything, as the following test program demonstrates:

#include <stdio.h>
#include <stdlib.h>

/* mstats() is not declared in any public header, so struct and
 * prototype are copied from malloc_hoard2/wrapper.cpp: */
struct mstats {
	size_t bytes_total;
	size_t chunks_used;
	size_t bytes_used;
	size_t chunks_free;
	size_t bytes_free;
};

struct mstats mstats(void);

int main(void)
{
	void *p = malloc(1024*1024);
	struct mstats stats = mstats();
	printf("Total=%zu, Used=%zu, Free=%zu\n",
	       stats.bytes_total, stats.bytes_used, stats.bytes_free);
	free(p);
	return 0;
}

It returns 0 for everything. Perhaps I should really instrument/wrap all allocations as a fallback, at least in the code I control… It won’t be precise, but it might at least prevent crashes on otherwise unsupported platforms.

By the way, I’m running the following version:

> uname -a
Haiku shredder 1 hrev54154+111 Jun  7 2020 07:16 x86_64 x86_64 Haiku