Before LLVM’s Flang compiler existed for Fortran, there was a GCC plugin called DragonEgg that allowed LLVM’s code generator to be used with then-recent versions of GCC, as a way of reusing GCC’s Fortran frontend. Could the GCC2 runtime-compatibility parts of LibStdC++ be split out as a plugin for a modern GCC version in the same way? Also, is LibC++RT any more flexible than LibStdC++ in that regard?
These questions are not just geared toward getting the GCC2 ABI modernized but also could open other possibilities with my attempt at a WebAssembly package format. It will need Clang on the frontend and something with high-quality link-time optimization on the backend.
After some research I’m beginning to wonder if this would relate to the LibC replacement ticket. LibC++ work would likely depend on it.
The dependency on GCC2 is not-so-promising, but trying to implement LibC and LibStdC++ on top of a more modern architecture looks like it would be a pain. Maybe a BeOS emulation layer would help make compatibility more future-proof. All of the Amiga-like next-generation platforms have had to resort to Amiga emulation, because implementing the famous Amiga multimedia chipset on modern hardware would otherwise require an FPGA. I wonder if this would be a route to take past release one of Haiku.
I don’t understand why you are talking about libc here. The C library already works with different gcc versions and is not a problem here.
The main problem to keep gcc2 apps working is the gcc2 ABI and mangling. This is the way the C++ compiler translates a C++ name (something like namespace::class::method(parameterType1, parameterType2), with additional cases for templates) into a symbol in the ELF file (which is a string of letters, digits and underscores).
BeOS used the gcc2 convention for this. At some point they noticed that the way they were doing this made it possible to have two different C++ things end up with the same symbol name. So, they changed the way they were doing things. The new ABI they use is called “Itanium” because it was originally defined for Intel Itanium CPUs. It is now a de facto standard for at least gcc and clang.
So that would be the main problem to solve: either patch a newer compiler to use the old ABI, or write some tool that can generate symbol aliases (I think this could start as a shell script using readelf to list the symbols, and the two versions of c++filt for gcc2 and gcc8 to convert the names).
There will be some other surprises: parts of the gcc2 libgcc may be needed, the new compiler may have a different idea of the padding of a class, etc.
But, honestly, I think pretty much everyone at this point is happy with a much simpler solution: do not care about BeOS compatibility. Make gcc8-only apps. Use the 64bit version of Haiku if your hardware supports it. BeOS compatibility is not worth putting all this effort into, and you only risk keeping it alive for longer than it really deserves.
In the preprocessor-macros-to-consts thread, or whatever I called it, you pointed out that GCC2 has poor dead-code elimination. I assume that is because the OS is compiled with an updated fork of GCC2. I’m looking for ways to improve that.
Due to the link-time optimization requirement of the WebAssembly package format I proposed and am looking into, I may require that LLVM be used to install the packages. Clang 12 is in HaikuDepot so that shouldn’t be a huge problem.
This leaves the problem of making binaries architecture-neutral when the ApplicationKit headers pull in architecture-specific preprocessor macros. Obviously, that can’t be allowed. My solution was to convert those preprocessor definitions into consts, so that their values could be deferred until link time and supplied by an architecture-specific linker library on the destination platform.
For backward compatibility, native code generation that used to depend on those preprocessor macros would need good dead-code elimination and constant folding to handle the consts instead.
The only other option is to give the bytecode architecture a different header set that doesn’t include those preprocessor directives at all. This will reduce source-code compatibility but is an option.