Interesting side question here - I know some work was previously done on FatELF support, but the only reference I can find to it on the forum is @PulkoMandy saying “We don’t want it.” Is that something likely to be reconsidered now that there’s a viable non-x86 port? It seems to cleanly solve a couple of problems, without breaking compatibility with BeOS apps or other already-compiled code.
Which problem do you think it solves?
I still agree with my younger self: it requires compiling the same code several times, meaning longer compile times and larger packages; it makes it difficult to properly test a package or executable; and I don’t understand which problem it solves.
The “How to have multiple CPU architectures without making life hell for anyone using the system” problem? Not all software can or should be in the curated system repo, and having multiple architectures without a universal binary format makes downloading software a nightmare.
I mean, macOS has demonstrated this neatly across multiple transitions - even Halo was a Universal Binary - while on Linux, shipping software as a binary means an ever-expanding number of download options, or mandating that people install your custom repositories.
If “Haiku on RISC-V/ARM/POWER” is a valid concept, then software intended for Haiku will already need to be compiled multiple times. Allowing those binaries to be packaged into a single “runs on anything” executable just makes it easier for software to actually be distributed once it’s compiled. Same for testing - if “non-AMD64 Haiku” is a valid concept, developers are still going to have to test on those systems; FatELF doesn’t change that - it just makes distributing the software afterwards less hellish.
Larger executables? Sure - but in practice that’s never been a real issue. A 25GB game is not 24GB of compiled AMD64 code and 1GB of resources. This also wouldn’t apply to the base OS since that could (and should) be installed with only the target architecture. Plus FatELF on Linux can easily be extracted into ELF binaries for those hardcore space-tuners, just as people did with Tiger. I don’t see why this wouldn’t be just as possible on Haiku.
The real question is - how else do you solve this problem better?
There was some minor popularity for FatELF pre-package-management, but Haiku package management was the solution we went with, for a lot of reasons. The biggest is deduplication of dependencies: it really reduces your application size if you can depend on shared libraries.
Look at Go applications. They’re great for your company’s microservice, but Go applications can balloon to 300 MiB+ due to their static linkage.
Another reason is security. Consider this… you have a severe CVE in a dependency of your application… Let’s call that dependency log4j. With package management, you update the log4j package. CVE patched. However, if you have 50 apps, 30 of which bundle log4j… you now have to update 30 apps or risk a compromise.
There’s a reason package managers are the most popular solution. Flatpak and Snap are two solutions gaining steam on Linux, and they also offer sandboxing to help reduce the scope of compromise if a single app has a vulnerability.
None of these are perfect reasons (which is why everyone discussed FatELF for so long). FatELF seemed very “Be-like”. I personally would like to see it explored again; however, we need package management for the huge number of ports we enjoy today.
I’d love to see a Haiku FatELF with ARM/AARCH64/PPC/PPC64/x86/x86_64/SPARC/M68K/RV64 just to be able to marvel at such a thing (also because such a file would mean further progress on some of those ports).
Also, I’m pretty sure FatELF doesn’t mean everything is statically linked. Isn’t it like Fat Mach-O, where there’s a header which has a simple table with architecture code and offset to that particular Mach-O within the file?
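That is indeed the layout: a small big-endian header with a slice count, followed by one record per architecture pointing at the embedded binary. A minimal Python sketch of reading that table, following the `fat_header`/`fat_arch` structures from Apple’s `<mach-o/fat.h>` (the synthetic blob below is invented for illustration and contains only the table, no real code):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic of a fat Mach-O container

def parse_fat_header(data: bytes):
    """Return the (cputype, offset, size) table from a fat Mach-O blob."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat Mach-O file")
    slices = []
    for i in range(nfat_arch):
        # Each record: cputype, cpusubtype, file offset, size, alignment.
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">IIIII", data, 8 + i * 20)
        slices.append({"cputype": cputype, "offset": offset, "size": size})
    return slices

# Synthetic two-slice table: 0x01000007 = CPU_TYPE_X86_64,
# 0x0100000C = CPU_TYPE_ARM64 (offsets/sizes are made up).
blob = struct.pack(">II", FAT_MAGIC, 2)
blob += struct.pack(">IIIII", 0x01000007, 3, 4096, 1234, 12)
blob += struct.pack(">IIIII", 0x0100000C, 0, 8192, 1234, 14)

for s in parse_fat_header(blob):
    print(hex(s["cputype"]), s["offset"], s["size"])
```

FatELF’s on-disk format differs in its field layout, but it is the same idea: a table of architecture records, each pointing at a complete ELF image inside the file.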
When MacOS Snow Leopard dropped PowerPC support I immediately gained 25 GB of free space that would not have been possible to access on my 80 GB boot device. I’m with PulkoMandy on this one.
I’d rather see a package manager bridge to WAPM than see loads of hard disk space wasted on unused architecture binary blobs.
Our package manager makes it easy to add extra repositories, and that’s how we expect most people will distribute their software. Besides the “pick the right CPU architecture” problem, it also solves distributing updates. All while not introducing extra problems: extra disk space usage, longer build times, and an annoying setup, since FatELF forces you to cross-compile for most architectures.
Also, Apple only ever did two architectures at a time, so an overhead of at most 2x. We already have 3, and with ARM and ARM64 on the way we may soon have 5. Not to mention SPARC, PPC, and m68k. Certainly a FatELF that is up to 8x bigger than a normal executable, and takes 8x longer to compile, is a very bad idea.
Apple distributed (almost) the entire OS as Universal Binary. There’s no need to do that with an OS that has a package manager. In a Haiku world, the only time someone would need to use FatELF is for distributing an application that is not in the Repo. To give some examples from my laptop - Skype, Discord, Telegram. Or from my work desktop, Gemalto and SALTO.
Also, there was a one-click tool (admittedly not supplied by Apple) that would strip the binaries down to a single architecture. In FatELF there is a command line tool fatelf-extract that does this per-binary. Adding this to a package manager would be relatively trivial for those that want it.
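A toy illustration (not FatELF’s actual code; the container layout here is invented) of why per-binary stripping is cheap: once the container’s table gives each slice’s offset and size, extracting one architecture is a plain byte copy.

```python
def extract_slice(container: bytes, offset: int, size: int) -> bytes:
    """Copy a single embedded binary out of a multi-arch container."""
    return container[offset:offset + size]

# Pretend container: a 16-byte header followed by two 4-byte "binaries".
container = b"\x00" * 16 + b"X86!" + b"ARM!"
print(extract_slice(container, 16, 4))  # the first slice
```

A package manager hook doing this at install time would only ever write the slice matching the local architecture to disk, which is essentially what app thinning does.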
Sorry, that is wrong.
At its height, you had Intel 32- and 64-bit as well as PPC 32- and 64-bit support, so you had 4x.
I really doubt that. If you look at a macOS application, the binary is only a small part of it. Most of the size of an application comes from the localisation or the NIB files for the UI.
If you’re referring to those “slimming tools” from back then, they would also remove some less important language files as well.
But I do not see how this can be adapted to Haiku. The OS may support FatELF for 3rd party applications, but I don’t see why a Haiku release should be provided as a multi-architecture image.
Of the commercial applications I install on my laptop, the only ones that offer a repo are Microsoft Edge and Microsoft Teams.
“Just use a repo for everything” is a thoroughly tested workflow that did not survive contact with reality!
The way Mac OS X did it is completely different though, IIRC. There was a physical binary for every architecture, and the Whatever.app file structure just held all the files. This is surely more akin to the fat binaries from the 68000/PowerPC era? In the first transition, the executable file format was built around code fragments, and the binary could contain code fragments for both architectures. The remnants of this can still be found in BeOS PowerPC, because the PEF executable format can contain 68000 code fragments (but obviously doesn’t, as that would be pretty useless).
As someone said, there have always been tools to remove the other architectures. Though in the System 7.5.5 and earlier days, this could break the OS, as half of it was still 68000 code. I think Mac OS 8.0 was the first PowerPC-native release.
Why would you pack log4j in a package? It’s an open-source tool that I would expect to be in the system repo before I used it as a dependency for my commercial application.
If you did ship it, why would that change based on whether you’re shipping five packages (x86-R1, x86-R2, x64-R2, RISCV-R2, ARM64-R2) or one? Especially since log4j is compiled to Java bytecode and isn’t architecture-dependent in the first place?
FatELF does not affect the use of shared libraries, or improve or degrade the idiocy of how people pack software. It just means they only have to ship a single binary for all architectures. If anything, you could argue that making it easier for people to download and install software reduces the burden on users to patch security issues for non-repo applications.
No, they’re embedded in the Mach-O binary, just like FatELF. You can see this if you extract, say, TextWrangler 3.5.3 with 7zip. In the “MacOS” folder there is a single “TextWrangler” binary, with the four architectures mentioned above embedded in it.
As a side note, iOS packages are reportedly also fat binaries, with 7 architectures in them. The App Store then strips the binaries during install to save disk space on the phone or tablet.
It doesn’t affect using shared libraries, it doesn’t have security implications, it doesn’t mandate shipping the entire OS or everything in the Repo as FatELF (though it certainly COULD be, if you wanted to make installing the OS easier at the cost of disk space), and it doesn’t break compatibility with existing binaries.
While WebAssembly may not be a native format, it should produce much smaller files than FatELF. It has its growing pains, to be certain, but it is also actively developed throughout the industry.
The current shortcomings include difficulty generating 64-bit native apps without a JIT, and big-endian compatibility is not yet an option. Difficulty supporting 256-bit and 512-bit vector registers (or any size other than 128-bit) may be a problem too, but I haven’t looked into that deeply.
It is also more future-proof, in that compatibility breaks at the CPU architecture level and changes in operating system don’t cripple it. It provides a migration path off mainstream OSes, in that it can be made compatible with any arbitrarily capable OS. Ultimately, that last feature may be the first nail in the coffin for stable ABIs other than WebAssembly itself. I don’t much appreciate the duplication of effort caused by profit-driven operating systems and their separate ecosystems of vendor lock-in.
Actually, Gentoo was based on source-code builds.
It wasn’t a repo like git per se, but it was all source builds.
That made me learn a lot about Linux, but it was a pain.
There is a philosophical question here:
What is the purpose of Haiku? To be another Linux? To be a geek OS? To achieve mainstream adoption?
When that question is really answered, and there is someone at the helm of Haiku steering the ship with a clear goal in sight, all these questions can be answered.
But, and I will say but, even Microsoft adopted the App Store concept, which is nothing but a package manager of sorts (it still allows for any binaries, but so does Android with APKs).
I guess the idea is to have a progressive OS based on progressive ideas inherited from BeOS, that’s feature-rich and supports portability by supporting e.g. POSIX. Also, its goal is to be business- and open-source-friendly due to the MIT license. Kudos to the whole Haiku team. I can’t even use Haiku on a daily basis yet, but I see that things are getting shaped step by step. I am very impressed.
Isn’t POSIX the definition of archaic? I get that it’s useful for compatibility though.
Archaic APIs do not necessarily need to be bad. But for sure some are.
POSIX is love, POSIX is life