Haiku and FatELF

This is how Java works :slight_smile: It was an example of something getting exploited after everyone had built it into their Java apps. It’s why the log4j CVE is so messy: Java doesn’t depend on shared libraries for things like this and instead vendors them into the jars.

In the last few days I’ve had to professionally investigate roughly 10-15 Java apps for exploitable log4j versions, and to verify Gerrit and HaikuDepot (our Java apps).

It’s roughly equivalent to FatELF packing shared libraries within apps.

The fatness of ELF doesn’t require static compilation.
But statically compiling would be the same thing, and is what you describe. I’ve noticed something similar with some macOS bundles, where some libraries aren’t statically compiled in but are instead shipped inside the .app, .bundle, or .framework bundles.

Except that your statement about FatELF is completely wrong. FatELF does not require this. FatELF only bundles additional architectures of the same code. I don’t know why you keep talking about dependencies, because that has nothing to do with FatELF at all.

Edit: Let’s say you have an AMD64 ELF binary, Helloworld, that links against the system’s shared libc.

The FatELF version now contains the x86, AMD64 and RISCV64 binaries instead of just the AMD64 one. But they still link against the system’s shared libc; it isn’t suddenly packaged into the FatELF. exec just loads whichever one is the right architecture for your system and runs it with the associated shared library (which can itself be either a fat binary or a single-architecture one).
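To make that concrete: conceptually, a FatELF file is just a small index sitting in front of several ordinary ELF images. The structs, field names and magic value below are an illustrative simplification rather than the exact on-disk layout from the FatELF spec, but the lookup exec has to do is roughly this sketch in C:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative FatELF-style index; the magic value and field layout are
 * simplified for this example, not the real on-disk format. */
#define FATELF_MAGIC_EXAMPLE 0xFA7E1FEEu

typedef struct {
    uint16_t machine;    /* ELF e_machine, e.g. 62 = EM_X86_64, 183 = EM_AARCH64 */
    uint8_t  word_size;  /* 32 or 64 */
    uint8_t  byte_order; /* 1 = ELFDATA2LSB, 2 = ELFDATA2MSB */
    uint64_t offset;     /* where this architecture's plain ELF image starts */
    uint64_t size;       /* length of that image */
} fatelf_record;

typedef struct {
    uint32_t      magic;
    uint16_t      version;
    uint16_t      num_records;
    fatelf_record records[];
} fatelf_header;

/* What exec would do: find the record matching the host CPU, or NULL. */
static const fatelf_record *
fatelf_find_record(const fatelf_header *hdr, uint16_t machine,
                   uint8_t word_size, uint8_t byte_order)
{
    if (hdr->magic != FATELF_MAGIC_EXAMPLE)
        return NULL; /* not a FatELF file */

    for (uint16_t i = 0; i < hdr->num_records; i++) {
        const fatelf_record *r = &hdr->records[i];
        if (r->machine == machine && r->word_size == word_size
                && r->byte_order == byte_order)
            return r;
    }
    return NULL; /* no slice for this architecture */
}

int main(void)
{
    /* Fake two-slice index, just to exercise the lookup. */
    fatelf_header *hdr = malloc(sizeof(*hdr) + 2 * sizeof(fatelf_record));
    if (hdr == NULL)
        return 1;
    hdr->magic = FATELF_MAGIC_EXAMPLE;
    hdr->version = 1;
    hdr->num_records = 2;
    hdr->records[0] = (fatelf_record){ .machine = 62,  .word_size = 64,
                                       .byte_order = 1, .offset = 4096,  .size = 10000 };
    hdr->records[1] = (fatelf_record){ .machine = 183, .word_size = 64,
                                       .byte_order = 1, .offset = 16384, .size = 12000 };

    const fatelf_record *r = fatelf_find_record(hdr, 183, 64, 1);
    if (r != NULL)
        printf("matching slice at offset %llu\n", (unsigned long long)r->offset);
    else
        printf("no slice for this CPU\n");

    free(hdr);
    return 0;
}
```

Note that nothing in there touches dependencies: the chosen slice still goes through the normal runtime loader and resolves libroot/libc exactly the way a single-architecture binary does.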

I’ve researched this issue extensively. Beyond slightly more work maintaining repos, FatELF creates more problems than it solves.

Could you elaborate on this a little? I appreciate your expertise, but as a novice I can’t see what issues FatELF creates other than the extra cycles in exec to select the binary and the increased binary size.

Compilation of code becomes increasingly tedious

That’s… really not an issue. Adding support for FatELF doesn’t prevent people from compiling a single-architecture ELF if they want to, and compiling a FatELF isn’t any different from compiling six different architectures separately - which, if Haiku ports are a valid concept, will already be necessary.

I thought this was a technical concern???

What happens if x86 gets a newer version of SDL (let’s say some new SDL3 version), but ARM still has SDL2?

Execution on x86 works; execution on ARM breaks. That’s why people pack libraries into self-contained applications.

I don’t think FatELF is a horrible idea to supplement our package management, but without anyone releasing applications for it… it doesn’t matter.

With that said @mo-g, are you going to add support for FatELF to our libroot and submit the code to review.haiku-os.org?

What happens if x86 gets a newer version of SDL (let’s say some new SDL3 version), but ARM still has SDL2?

ABI-breaking changes should only happen with a major OS release and should be co-ordinated with stable ports - just as it is with, say, Red Hat on AMD64, ARM64, POWER and Z/arch. Ideally Haiku would aim to offer the kind of stable ABI backwards compatibility that Windows does, just as R1 targets backwards compatibility with BeOS apps. But at the very least, it’s not going to be “nightlies with breaking changes” forever.

There’s a second interesting question that follows on from that: which architectures Haiku should treat as “official” after R1. From my perspective, AMD64, RISCV64 and ARM64 are the most relevant for a desktop OS in the 2020s (since these are the architectures with ready and increasing availability of desktop and laptop hardware) - but this is one where my opinion is worth little to nothing!

I don’t think FatELF is a horrible idea to supplement our package management, but without anyone releasing applications for it… it doesn’t matter.

Without OS support they can’t release binaries for it, but without available binaries for it we can’t add OS support? :wink:

With that said @kallisti5, I’ve just forked landonf’s previous patches from way back when. I’m not going to pretend I’m a good coder, but I’ll take a look and see what I can do over the winter break. :slight_smile:

Wish me luck!

The technical concern is over unneeded complexity.

K.I.S.S.

I think the idea would be to ensure all supported architectures are kept in line with each other, especially to prevent bitrot. It would be sad if architectures were lost simply because they weren’t maintained over the years (like the PPC version, for example). If maintained properly, it shouldn’t be an issue.

But I also take your point: FatELF right now is a solution looking for a problem, and there are far more pressing things to address. It would be cool if at least the kernel supported it, so that it would load a FatELF if one were encountered in the future, without making everything FatELF.
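For what it’s worth, “load it if encountered, but don’t force it on anything” really only needs one extra branch in front of the existing ELF path. A minimal sketch, not actual Haiku loader code - the helper functions are hypothetical stand-ins and the magic value is just an illustrative placeholder:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FATELF_MAGIC_EXAMPLE 0xFA7E1FEEu /* illustrative placeholder, not the real constant */

/* Hypothetical stand-ins: the existing plain-ELF path, and a FatELF path
 * that would pick the matching slice and then reuse the plain-ELF path. */
static int load_plain_elf(const uint8_t *file, size_t length)
{
    (void)file; (void)length;
    printf("plain ELF: existing code path, completely unchanged\n");
    return 0;
}

static int load_fatelf(const uint8_t *file, size_t length)
{
    (void)file; (void)length;
    printf("FatELF: pick the slice for this CPU, then reuse the ELF path\n");
    return 0;
}

/* The only thing plain binaries would ever see is this one extra check. */
static int load_image(const uint8_t *file, size_t length)
{
    uint32_t magic;

    if (length < sizeof(magic))
        return -1;
    memcpy(&magic, file, sizeof(magic));

    if (magic == FATELF_MAGIC_EXAMPLE)
        return load_fatelf(file, length);
    if (memcmp(file, "\177ELF", 4) == 0)
        return load_plain_elf(file, length);

    return -1; /* not something we know how to execute */
}

int main(void)
{
    const uint8_t fake_elf[] = { 0x7F, 'E', 'L', 'F', 2, 1, 1, 0 };
    return load_image(fake_elf, sizeof(fake_elf));
}
```

Everything that isn’t a FatELF takes exactly the path it does today, so opt-in support shouldn’t cost plain binaries anything.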


I mean, that’s where I stand on it as well.

I saw this exact phrase used all over the place in the Linux commentariat when I was researching FatELF over the last few days. I’ll say this - I actually had no idea FatELF existed until I was looking at the Haiku RISC-V port. Straight away I thought, “Haiku’s packaging system allows drop-in installation. How is that going to work with multiple architectures?” and went googling. FatELF is useless on Linux, because the “distro model” ensures you’re going to need fifty different packages of your software anyway. Which is fine, but it basically destroys Linux - well, every UNIX except macOS - as a desktop OS.

Haiku doesn’t have this problem. As long as either the native APIs stay stable or some method of backwards compatibility is preserved, FatELF significantly lowers the burden on end users of a multi-arch OS. :slight_smile:

Hmm, nothing? The ARM binary will still link to the old SDL version and the x86 to the new one.
I mean, FatELF is just a container, and to the runtime loader it will always look like an ELF binary for the correct architecture.
As an example, on macOS I can still build an application for PPC on OS X 10.5, compile the same code for x86 on OS X 10.6 and for x86_64 on macOS 10.9 or higher, combine these Mach-O binaries into one binary with lipo, and distribute that.
And rest assured, all those macOS versions have different dynamic libraries, which are then used accordingly.
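If anyone wants to poke at that on a Mac, the universal (“fat”) header that lipo writes is easy to read back. Here’s a minimal macOS-only C sketch (it just prints raw cputype numbers rather than mapping them to names) showing that the container is nothing more than a table of per-architecture slices:

```c
/* Minimal macOS-only example: list the slices in a universal (fat) Mach-O
 * binary, a rough subset of what `lipo -info` reports. */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>  /* ntohl: the fat header is stored big-endian */
#include <mach-o/fat.h> /* struct fat_header, struct fat_arch, FAT_MAGIC */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    struct fat_header hdr;
    if (fread(&hdr, sizeof(hdr), 1, f) != 1) {
        fprintf(stderr, "read error\n");
        fclose(f);
        return 1;
    }

    if (ntohl(hdr.magic) != FAT_MAGIC) {
        printf("not a fat binary (probably a single-architecture Mach-O)\n");
        fclose(f);
        return 0;
    }

    uint32_t count = ntohl(hdr.nfat_arch);
    printf("%u architecture slice(s):\n", count);

    for (uint32_t i = 0; i < count; i++) {
        struct fat_arch arch;
        if (fread(&arch, sizeof(arch), 1, f) != 1)
            break;
        /* cputype values are listed in <mach/machine.h>. */
        printf("  cputype %u, offset %u, size %u\n",
               ntohl((uint32_t)arch.cputype),
               ntohl(arch.offset), ntohl(arch.size));
    }

    fclose(f);
    return 0;
}
```

Point it at any universal app binary and it prints a couple of slices, each with its own offset and size - which is exactly the idea FatELF applies to ELF.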

No please, we just got notification of a CVE, so it is compromised.
The critical vulnerability is called Log4Shell (or LogJam), Java CVE-2021-44228.
It affects the log4j package.

I know, it’s an example :slight_smile:
