getentropy requires more work because we need to actually collect entropy somewhere in the kernel. Help is welcome with that.
We don’t want to introduce the function until it can return actually useful results; otherwise we would mislead people into thinking they have a reliable entropy source when there isn’t one.
Please forgive my ignorance, but since /dev/[u]random seems to be available already in Haiku, can’t syscall getentropy() use the same source of entropy as the /dev/[u]random virtual file?
They exist, but they don’t work very well. /dev/urandom will only provide a handful of bytes before blocking because of lack of entropy; /dev/random can provide lower-quality pseudo-random numbers in that case.
The plan is, I think (but other people may have other ideas):
implement getentropy() and collect entropy from more sources
implement arc4random() (a patch is already ready, but it is useless, i.e. not safe for cryptographic use, without a working getentropy())
rebuild /dev/urandom and /dev/random on top of arc4random and replace the current implementation
The distinction between urandom and random is a bit weird; it differs per OS (and per Linux version).
I think the “state of the art” is to have neither block, ever, except during boot when the entropy has to be collected initially. (Which is irrelevant for Haiku because we can use some saved entropy from the last boot.)
The “during early boot” specificity itself is a Linux oddity, I think. On other systems both random devices just provide randomness and entropy at any time, and it’s encouraged to use another API instead of the devices anyway (something like arc4random). They achieve this by storing some entropy from the previous boot on disk, so it is available during the next boot sequence.
And yes, possibly I am mixing up random and urandom here. It doesn’t matter: when this rework is done they will just behave exactly the same.
To my understanding the difference between /dev/random and /dev/urandom is that /dev/random maintains an estimate of how much entropy is left in its entropy pool. If that value drops below the requested number of bytes, it will block until enough “fresh” entropy becomes available. Conversely, /dev/urandom never blocks, once the entropy pool has been initialized. Also, I think that early in the boot process, when the entropy pool has not been initialized yet, even /dev/urandom may block, because… what else should it do in this situation? But that applies to /dev/random just as well.
I think Linux and the BSDs mostly agree on this, except that in OpenBSD there is no distinction between /dev/random and /dev/urandom; instead, /dev/random is an alias for /dev/urandom on OpenBSD.
getentropy() provides random bytes suitable for seeding a PRNG, directly from the kernel’s entropy source (the same entropy source used by /dev/urandom), whereas arc4random() is a pure user-space PRNG, which is (re)seeded from getentropy() but doesn’t involve the kernel otherwise.