Can a process query its (resident) memory size?

Is it possible for a Haiku process to query its resident memory size? On Linux you could do this by parsing /proc/self/statm. Practically every UNIX has its own API for that purpose.

Alternatively, I could work with the amount of memory allocated via malloc() if it had any efficient introspection functions. Unfortunately, at least malloc.h doesn’t seem to declare any.

How is malloc() implemented on Haiku anyway? sbrk(), mmap() or some BeOS-derived mechanism?

Currently malloc is sbrk-based, but we plan to change that.

You can use get_next_area_info() to iterate over areas; that should allow you to compute the total memory usage.
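
A minimal sketch of what that iteration could look like (team 0 refers to the calling team):

#include <stdio.h>
#include <OS.h>

int main(void)
{
	ssize_t cookie = 0;
	area_info info;
	size_t total = 0, resident = 0;

	/* Walk all areas of the calling team. */
	while (get_next_area_info(0, &cookie, &info) == B_OK) {
		total += info.size;		/* reserved address space */
		resident += info.ram_size;	/* actually backed by RAM */
	}
	printf("total: %zu bytes, resident: %zu bytes\n", total, resident);
	return 0;
}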

Thanks! But one more question: Is get_next_area_info() a constant-time operation, and is the number of areas in a typical UNIX program that allocates exclusively via the libc’s malloc() expected to be constant? And if so, will it stay that way once malloc() is no longer sbrk()-based?

I have to poll the program’s memory size often, so anything whose time complexity is proportional to the amount of memory allocated (cf. Linux’s mallinfo()) won’t do.

Well, you’d need to iterate over all areas, so it’s linear in the number of areas.
For now this is constant (malloc uses a single area, and when it’s full your program runs out of memory). The new allocator may either attempt to resize its single area, or allocate more areas if the address space is too fragmented, I guess (but as long as it isn’t written yet, I can’t say for sure).

I’m not aware of a more efficient way currently.

What is your purpose for polling the app’s memory, for which a constant-time solution is needed?

Note that all memory in Haiku is in “areas”, so mmap()ed regions will show up in the area_info iteration just like the standard malloc areas.

This is for my text editor SciTECO. It has the peculiarity that a single keypress by the user can allocate arbitrarily much memory, which could eventually crash the program. This can even happen by accident (e.g. a typo): it’s sort of an interactive programming language with undo, so any infinite loop will do it. SciTECO therefore tries to limit the total amount of memory allocated (“memory limiting”). Of course there are POSIX resource limits, but besides being non-portable - I support non-POSIX platforms - they only work if you have total control over all allocations (i.e. over what happens when malloc() returns NULL). Unfortunately, I use two libraries that do not cooperate with resource limits (libglib and Scintilla).

Over the past 8 years, I have tried different strategies, including overloading C++'s new/delete operators, overriding malloc() and using malloc_usable_size(), even replacing the malloc() implementation altogether… but so far, using OS-specific APIs seems to be the least insane way to solve this.

Perhaps I’ll simply introduce a generic fallback for UNIX based on sbrk(0) - &end. This should work on Haiku for the time being. However, without malloc_trim(), hitting any memory limit is a show stopper even if you can detect it, since there might be no way to recover from it.
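
A minimal sketch of that fallback, assuming the linker provides the conventional end symbol:

#include <unistd.h>

/* Provided by the linker on many UNIX systems: the first address
 * past the uninitialized data (BSS) segment. */
extern char end;

static size_t heap_size(void)
{
	/* sbrk(0) returns the current program break, so the difference
	 * approximates the total size of the heap. */
	return (char *)sbrk(0) - &end;
}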

I’m still unsure how to solve this once and for all.

No, it will not, as the system malloc uses its own internal sbrk and not the POSIX function itself.

How is this occurring? Isn’t it either (1) memory inside your “scripting language” (which you can track) or (2) memory allocated for Scintilla buffers, which will correspond closely to how many characters are in the edit view – and both of these are easily trackable without tracking overall malloc, right?

I can of course try to do as much as possible using custom allocators - wrapping malloc() - in order to track as much memory as possible. But that has its own disadvantages. It will be very imprecise, especially with Scintilla; it’s not trivial to approximate its memory usage. Lines can have attributes as well, and there are styles and undo tokens. I can’t be sure that my approximation won’t be off significantly from the real values - unless I try again to override new/delete. But even that won’t help without malloc_usable_size(), which itself is unreliable. Or, even worse, by including the size of your memory chunk at the beginning of every heap object. “Sized” deallocation functions are only available since C++14 and turned out to be unsuitable for memory tracking as well.
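
To illustrate the size-header approach I mentioned, here is a sketch (tracked_malloc()/tracked_free() are hypothetical names, and every single allocation would have to be funneled through them):

#include <stdlib.h>
#include <stdatomic.h>

static atomic_size_t allocated_bytes;

void *tracked_malloc(size_t size)
{
	/* Prepend the requested size to the chunk itself.
	 * Note: this weakens the alignment guarantee of the
	 * returned pointer, one of the reasons I consider it
	 * the worst option. */
	size_t *p = malloc(sizeof(size_t) + size);
	if (!p)
		return NULL;
	*p = size;
	atomic_fetch_add(&allocated_bytes, size);
	return p + 1;
}

void tracked_free(void *ptr)
{
	if (!ptr)
		return;
	size_t *p = (size_t *)ptr - 1;
	atomic_fetch_sub(&allocated_bytes, *p);
	free(p);
}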

So I’ll at least try to stick with my current approach. For the aforementioned reasons and practical problems, I also believe that an OS should provide that level of introspection to its processes - and most seem to. Please note that POSIX even specifies getrusage(), and some UNIXes define additional fields like ru_maxrss, so that might be a way to go on Haiku as well some day.
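
For illustration, on systems that define ru_maxrss this looks like the following (note that it reports the peak rather than the current resident set size, and the unit differs between systems):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rusage usage;

	if (getrusage(RUSAGE_SELF, &usage) == 0)
		/* ru_maxrss is in kilobytes on Linux and the BSDs,
		 * but in bytes on Mac OS. */
		printf("peak RSS: %ld\n", usage.ru_maxrss);
	return 0;
}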

Haiku has an mstats() function that can be used to find out how much memory has been allocated by malloc: https://git.haiku-os.org/haiku/tree/src/system/libroot/posix/malloc_hoard2/wrapper.cpp#n570.

That doesn’t seem to be in any header, does it? So I’d have to copy the struct and declaration into my sources. There is no guarantee this function won’t be removed or changed without prior notice. It might still be better to iterate over all areas via get_next_area_info().

I am not sure, but the comment above suggests that this function came from BeOS, so it will probably not be deleted.

It does not give information about memory that was released via the free() function.

Confirmed. So it’s more or less useless in my case without malloc_trim().
Unfortunately, mstats() doesn’t do anything, as the following test program demonstrates:

#include <stdio.h>
#include <stdlib.h>

/* mstats() is not declared in any public header, so the struct and
 * prototype are copied from Haiku's malloc wrapper. */
struct mstats {
	size_t bytes_total;
	size_t chunks_used;
	size_t bytes_used;
	size_t chunks_free;
	size_t bytes_free;
};

struct mstats mstats(void);

int main(void)
{
	void *p = malloc(1024*1024);
	struct mstats stats = mstats();
	printf("Total=%zu, Used=%zu, Free=%zu\n",
	       stats.bytes_total, stats.bytes_used, stats.bytes_free);
	free(p);
	return 0;
}

It returns 0 for everything. Perhaps I should really instrument/wrap all allocations as a fallback, at least in the code I control… That won’t be precise but might at least prevent crashes on otherwise unsupported platforms.

By the way, I’m running the following version:

> uname -a
Haiku shredder 1 hrev54154+111 Jun  7 2020 07:16 x86_64 x86_64 Haiku

Looking back at this problem after several years: I now entirely replace malloc() on some UNIX-like platforms, where this is relatively easy.
Would that work on Haiku as well? Can you define your own malloc(), and - most importantly - will your custom version of malloc() be used by all dependent dynamic libraries as well?

As long as it’s the first one linked…

I don’t see any possibility of that, however.

We routinely do it using libroot_debug as a replacement for libroot, in order to use a debugging implementation of malloc. I guess a similar system could be used if you want it.

You mean via LD_PRELOAD? Theoretically, yes, that would be possible. I could use a wrapper script that sets LD_PRELOAD before execing my main executable. The memory consumption values could be communicated back to my main program by defining a dummy function that’s overridden by the preloaded library. But it’s all sort of hacky and would be useful only on Haiku right now, significantly adding to the number of hacks I already have for all the other platforms. I am not willing to go that far.

Does malloc() still use a single memory area and what are your plans for the future?

It still uses a single area; the plan is to replace it when we find time to research a suitable replacement. Previous attempts have caused memory-use regressions, especially on 32-bit systems, so I guess the project is on hold for now.

Yes, libroot_debug is loaded via LD_PRELOAD, but I guess there are other ways this could be done? I’m not too sure what kind of ELF magic we have at our disposal here.

By the way, behaving like Linux in this regard, i.e. allowing a program to override malloc() globally, including in all dynamic libraries, without having to preload any dynamic library, would be a useful thing to support anyway. SciTECO is by far not the only program trying to replace malloc().

Is this “behaving like Linux” or “behaving like glibc”? I don’t know that musl allows this, or FreeBSD’s libc. Applications that want to portably replace malloc should do their own internal abstractions.

This is how Linux behaves, and it should work with any libc implementation, at least as long as it declares its malloc() as a weak symbol - which virtually all libcs do. Replacing malloc() this way within your own executable should be more or less portable (as long as the platform supports weak symbols). But it is the dynamic linker’s responsibility to make this work even for all dynamically loaded libraries.
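
For illustration, with glibc this boils down to something like the following sketch: a strong definition of malloc() in the executable takes precedence over the libc’s. (The __libc_* entry points are glibc-specific, so this part is not portable as-is.)

#include <stddef.h>

/* glibc exports its original allocator under these names. */
extern void *__libc_malloc(size_t size);
extern void __libc_free(void *ptr);

void *malloc(size_t size)
{
	/* Accounting/limiting logic would go here.
	 * calloc(), realloc() etc. need the same treatment. */
	return __libc_malloc(size);
}

void free(void *ptr)
{
	__libc_free(ptr);
}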

Overriding malloc() globally does indeed work on FreeBSD as well. It doesn’t work (that easily) on Mac OS and Windows, though.

You cannot - generally speaking - portably replace malloc, nor do all libraries allow customization of their internally used allocator. For instance, I am using glib (from GNOME). They used to support allocator customization via special APIs but, in their eternal wisdom, decided to deprecate them.