BeOS compatibility and packagefs

Yes, especially because there is some automatic moderation in place: if a lot of people flag a post, it is automatically hidden without the moderators having to do anything. We can undo that automatic decision, but it is just extra work.

The best thing to do if you want a thread to stop running is… stop replying to it. Let it sink way down in the forum, where no one will find it. This one would be forgotten already if you hadn’t replied with a request to close the thread :wink:

2 Likes

You’ll find that is not the case… it is definitely not random. The non-packaged folder nonsense, the lack of developer-level access to the filesystem by default (as was always intended for BeOS, a power-user operating system), and the overhead are all real things.

What you actually have is some specific people getting petty and apparently personally offended that some people who use Haiku dislike how things were done during the package management transition.

There have actually been quite a few positive discussions, and effort has been poured into trying to make it palatable, coming out of this as well as from the personal drive of the developers, of course…

I think Haiku’s package management is one of its greatest features. Immutability is all the rage these days, and Haiku did it before anyone else :slight_smile:

2 Likes

Haiku’s packages are not 100% immutable. A virus or script can extract, modify, repack, and replace the packages on your system, or simply override some files with a new package.
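
To make that concrete, here is a minimal sketch of the kind of manipulation being described, driven from Python. The package file name is made up, and the exact package(1) subcommands and flags (`extract`/`create` with `-C`) are assumptions from memory rather than a verified recipe:

```python
# Illustrative sketch only: the kind of tampering described above.
# Assumes Haiku's package(1) tool; the flags used here are assumptions.
import subprocess
import tempfile
from pathlib import Path

PKG = Path("/system/packages/example-1.0-1-x86_64.hpkg")  # hypothetical package

with tempfile.TemporaryDirectory() as work:
    # 1. Extract the package contents into a scratch directory.
    subprocess.run(["package", "extract", "-C", work, str(PKG)], check=True)

    # 2. Modify something inside the extracted tree.
    target = Path(work) / "bin" / "example"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(b"#!/bin/sh\necho tampered\n")

    # 3. Repack over the original; packagefs will pick up the new contents
    #    on the next package activation.
    subprocess.run(["package", "create", "-C", work, str(PKG)], check=True)
```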

1 Like

That is true. Haiku’s security architecture is unfortunately more nonexistent than lacking; I would highly recommend OpenBSD or Qubes to anyone who is concerned about security.

All dubious claims. It might be faster than the old method, but other package management implementations don’t do that at all and are also fast while being relatively simpler.

In fact, my experience is that package downloads on Haiku are exceedingly slow (is this some network kit issue?), and that package installation is even slower than on Linux… and prone to hanging.

I always wonder why Linux does all that stupid work of extracting a lot of small files on install, which can take nearly an hour, while a Haiku installation completes in a few seconds.

It is caused by a limitation of the TCP implementation: the window size is too small by default and can’t grow, so transfer speed decreases as the ping time grows.
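
For a rough sense of what that limitation costs: a single TCP connection can move at most about one window of data per round trip, so the ceiling is window size divided by RTT. A back-of-the-envelope sketch (the window sizes are illustrative, not Haiku’s actual defaults):

```python
# Back-of-the-envelope: max TCP throughput ≈ window size / round-trip time.
# The window sizes below are illustrative, not Haiku's actual defaults.

def max_throughput_kb_s(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection throughput in KB/s."""
    return window_bytes / (rtt_ms / 1000.0) / 1024.0

for window in (16 * 1024, 32 * 1024, 64 * 1024):
    for rtt in (20, 108, 600):  # ms: nearby server, ~108 ms as below, satellite
        print(f"window {window // 1024:3d} KiB, RTT {rtt:4d} ms "
              f"-> at most {max_throughput_kb_s(window, rtt):7.0f} KB/s")
```

With a fixed window of a few tens of kilobytes and a ~100 ms round trip, the ceiling lands in the few-hundred-KB/s range no matter how fast the link itself is.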

One would also wonder why Haiku spends the CPU cycles to extract the same files every time they are read instead of just reading them from the SSD… it’s a trade-off. And the hour-long Linux installs are typically full of a lot of bloated software Haiku doesn’t have either. I tend to take Arch’s pacman or the like as a reference for a good, efficient package manager, while APT and YUM are kind of insane.

Meh, it’s a tired topic though… the other things discussed here so far are more interesting, at least, as some of them could get adopted and improve Haiku.

Boot time is also slower due to the packagefs implementation.

So, that needs to be fixed right?

My ping to https://eu.hpkg.haiku-os.org is around 108 ms, which does seem fairly high, but imagine how bad download speeds for Haiku packages would be on, say, HughesNet satellite connections, which have ping times around 600-1200 ms.

Oh, so that’s why my package downloads are always at around 200-250 KB/s regardless of network bandwidth and conditions; I’m just always geographically far enough from the repo server for the bandwidth to never be fully saturated.

Yeah, I think something is still a little wrong, because even with the long pings it should still be going faster… e.g. I can download a Debian ISO in Firefox on Windows from France or Denmark at around 8 MB/s, while Haiku is going much slower than that, around 250 KB/s like you said.

I do notice the download speed ramps up to that 8 MB/s over time (this is probably an effect of the window size changing). This isn’t too big of a deal on Haiku for small packages, but for IntelliJ or other large packages it can take quite a long time just to download…
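
That ramp-up matches what TCP slow start looks like in a toy model: the congestion window roughly doubles every round trip until it hits the receive-window cap, after which the rate is pinned at cap/RTT. A sketch, ignoring losses and congestion avoidance, with the cap purely illustrative:

```python
# Rough simulation of TCP slow start: the congestion window doubles every
# round trip until it hits the receive-window cap. Numbers are illustrative.

MSS = 1460            # bytes per segment (typical for Ethernet)
RTT = 0.108           # seconds, the round trip measured above
CAP = 1024 * 1024     # assumed window cap; ~1 MiB sustains ~9 MB/s at this RTT

cwnd = 10 * MSS       # common initial window (RFC 6928)
elapsed = 0.0
while cwnd < CAP:
    rate = cwnd / RTT / 1e6
    print(f"t={elapsed:5.2f}s  cwnd={cwnd // 1024:4d} KiB  ~{rate:5.2f} MB/s")
    cwnd = min(cwnd * 2, CAP)   # slow start: double per round trip
    elapsed += RTT
print(f"t={elapsed:5.2f}s  window capped at {CAP // 1024} KiB "
      f"-> ~{CAP / RTT / 1e6:.1f} MB/s ceiling")
```

If the window can grow, the ramp to full speed takes well under a second even at 108 ms; if it is fixed at a few tens of kilobytes, the rate simply stays stuck at the cap divided by the RTT.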

Installing a single package is an atomic operation. Installing multiple packages should not take any significant time (as in, much faster than Alpine Linux’s apk, which I would consider a sane “traditional” package manager).

If you have any hangs or slowdowns, please do report them. I’ve not had either, so I am a bit surprised by this.

(For me, the only thing slowing down this relatively simple operation is the forced repo refresh pkgman does when installing local files.)

Yes, definitely. IIRC there is a review for this on Gerrit too.

1 Like

It doesn’t; that would be stupid…

The packagefs stores the uncompressed files in RAM, in the file cache. This can be removed from memory and later re-extracted, but only if you run out of memory. On modern systems, the RAM is often much larger than the complete uncompressed system, so, it’s not really a problem.
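
A conceptual sketch of that behaviour (not Haiku’s actual code): decompress on the first read, keep the result in memory, and only evict, forcing a later re-extraction, when a memory budget is exceeded:

```python
# Conceptual sketch (not Haiku's actual code): a read-through cache that keeps
# decompressed file contents in RAM and only re-extracts after eviction,
# which happens when a memory budget is exceeded.
import zlib
from collections import OrderedDict

class FileCache:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.entries: OrderedDict[str, bytes] = OrderedDict()  # path -> data, LRU order

    def read(self, path: str, compressed_store: dict) -> bytes:
        if path in self.entries:                 # cache hit: no decompression needed
            self.entries.move_to_end(path)
            return self.entries[path]
        data = zlib.decompress(compressed_store[path])   # miss: extract once
        self.entries[path] = data
        self.used += len(data)
        while self.used > self.budget:           # evict only under memory pressure
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)
        return data

# Usage: the store holds compressed "package" contents; repeated reads hit RAM.
store = {"/bin/hello": zlib.compress(b"hello world" * 100)}
cache = FileCache(budget_bytes=1 << 20)
cache.read("/bin/hello", store)   # decompressed once
cache.read("/bin/hello", store)   # served from RAM
```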

5 Likes

You can argue that all you like, but abusing free RAM isn’t completely free, and it’s not even possible to do effectively on low-RAM systems.

It’s also violating the basic storage hierarchy… you have a fast disk, so why are you trying to optimize storage space on disk when disk is the cheapest of all the memory types? It’s classic premature optimization and overcomplication of something that should be super simple.

You can make uncompressed packages if you want. It doesn’t change much in terms of disk access performance if you use an SSD (it does if you use a spinning hard disk, which was the case for most people when this was developed).

It also makes the install media smaller, and the download of updates faster.

1 Like

So would compressing the packages in any other format to begin with… literally everyone does this.

HPKGs are already compressed with ZSTD, which is widely considered to be the best overall compression algorithm and has been gaining traction for years. The next best open-source option would be XZ, unless of course its library’s recent security incident is taken into account.
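
For anyone who wants to check the trade-off themselves rather than take either claim on faith, here is a rough measurement sketch. It assumes the third-party zstandard module (`pip install zstandard`); lzma is Python’s stdlib XZ/LZMA binding. The sample data is synthetic, so real packages will give different numbers:

```python
# Quick ratio/speed comparison sketch. Assumes `pip install zstandard`;
# lzma (the XZ/LZMA binding) ships with Python. Results vary a lot with input.
import lzma
import time
import zstandard  # third-party

data = b"mildly repetitive sample payload for a rough comparison\n" * 50_000

def measure(name, compress, decompress):
    t0 = time.perf_counter(); blob = compress(data); t1 = time.perf_counter()
    decompress(blob);                                t2 = time.perf_counter()
    print(f"{name:5s} ratio {len(data) / len(blob):5.1f}x  "
          f"compress {t1 - t0:.3f}s  decompress {t2 - t1:.3f}s")

zc, zd = zstandard.ZstdCompressor(level=3), zstandard.ZstdDecompressor()
measure("zstd", zc.compress, zd.decompress)
measure("xz", lambda d: lzma.compress(d, preset=6), lzma.decompress)
```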

My point was that this is a BAD thing at runtime… unless you have hardware-accelerated decompression like consoles have, you are wasting CPU cycles or memory space to do that. Rather than implementing some complicated caching scheme, you should just let SSDs be fast with a simple caching scheme (otherwise you are pushing complexity and higher-latency caches up the stack).

And yes… everyone uses some form of relatively good compression these days.

I think there are at least two misleading things there…

First, “the best overall compression algorithm” just makes no sense at all. It depends on what you want to do with your compression. ZSTD is a good choice when you need fast decompression and still a pretty good reduction in size. But what if you want something even faster? You can try Snappy. Something that compresses more? There are a lot of choices, the latest ones being based on neural networks and needing a latest-generation GPU to run the decompression.

As for XZ: XZ is not a compression algorithm, it’s a file format. It uses LZMA compression. The HPKG files do not use an existing compressed file format because few of them are designed for random access, which is required for HPKG files. So I don’t see how the recent security incident would be taken into account here. We may still use the same compression algorithm, but a different implementation, for example.
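
To make the random-access point concrete: a stream-oriented format has to be decompressed from the start to reach a given offset, while a format made of independently compressed blocks plus an index can seek directly. A toy illustration of that idea, which only mimics the principle and is not the actual HPKG layout:

```python
# Toy illustration of random access via independently compressed blocks.
# This mimics the idea only; it is not the actual HPKG on-disk layout.
import zlib

BLOCK = 64 * 1024  # uncompressed block size

def build(data: bytes) -> list:
    """Compress data as a list of independently compressed blocks."""
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def read_at(blocks: list, offset: int, length: int) -> bytes:
    """Read an arbitrary range by decompressing only the blocks it touches."""
    out = b""
    first, last = offset // BLOCK, (offset + length - 1) // BLOCK
    for index in range(first, last + 1):
        out += zlib.decompress(blocks[index])
    start = offset - first * BLOCK
    return out[start:start + length]

payload = bytes(range(256)) * 10_000          # ~2.5 MB of sample data
blocks = build(payload)
assert read_at(blocks, 1_000_000, 16) == payload[1_000_000:1_000_016]
```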

Maybe we are wasting them, or maybe we are using them wisely. Only a benchmark could tell, and it depends a lot on the hardware specifics (SSD vs spinning disks, slow or fast internet connections) as well as the use cases (how often you need to extract a package, or download one, vs how often you access it).
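
In that spirit, even a crude measurement beats arguing. Something along these lines (purely illustrative; the file name and sizes are made up, and the OS file cache will heavily influence the first number) would show which side dominates on a given machine:

```python
# Crude benchmark sketch: time re-reading a file from disk vs decompressing an
# in-memory copy. Purely illustrative; real numbers depend on the hardware,
# the OS file cache, and the access pattern.
import os
import time
import zlib

PATH = "sample.bin"                      # hypothetical test file
data = os.urandom(8 * 1024 * 1024) + b"\0" * (8 * 1024 * 1024)  # half incompressible
with open(PATH, "wb") as f:
    f.write(data)
compressed = zlib.compress(data, 6)

t0 = time.perf_counter()
for _ in range(20):
    with open(PATH, "rb") as f:
        f.read()
t1 = time.perf_counter()
for _ in range(20):
    zlib.decompress(compressed)
t2 = time.perf_counter()

print(f"reread from disk/cache: {(t1 - t0) / 20 * 1000:.1f} ms per pass")
print(f"decompress from RAM:    {(t2 - t1) / 20 * 1000:.1f} ms per pass")
os.remove(PATH)
```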

It seems you have made up your mind on the results already without making any measurements. So, we can argue as much as we want, there is no data to back anything, and the discussion will not be very useful.

2 Likes