Fix QtWebEngine or work on WebKit?

Returning to the case of building qtwebengine with the debug option: even 32 GB of RAM is not enough for it, and at least 64 GB will be needed. Everything now hangs at the linking stage. Even if the mold linker can still help with linking under gcc12, the resulting binary still cannot be used for debugging, since the .so file is 4 GB, which is far too large for the built-in Haiku debugger. In this case gdb 10.2 could be a salvation, and it was even compiled a year ago by cocobean:

But it seems it needs a Haiku-specific kernel debugging interface, which probably has not been implemented yet. The rr debugger also exists, but it would need to be ported first. In any case, we need to produce an experimental build on a system with 64-128 GB of RAM and try to get debug output from the compiled binary. @Munchausen has a 128 GB machine available on demand, so if someone could install Haiku on it, compile qtwebengine with mold linking, and check how running the binary behaves under the built-in Haiku debugger, that would be great. I think this is the only thing we can do at the moment to understand where qtwebengine falls over - whether it is in the JIT or the CSS parser stage. The sources with the patch are located here:

2 Likes

I’ll see what I can do for a builder.

This should sound grotesque enough to everybody to start a revolution and reinvent the web as something where a web browser doesn’t have to do everything.

2 Likes

The problem is not the web. WebKit builds just fine with 8 GB, and probably even 4 GB, of RAM (and a bit of swap). It seems some developers, web or not, just… don’t care?

3 Likes

I think this is a good argument for breaking down large projects into several modules. Even Haiku is starting to reach the point where independent building and linking would speed things up. At the moment Jam needs to collect information from everything, so it helps if ‘everything’ consists of smaller modules.

5 Likes

I can install Haiku and allow SSH or remote desktop access if someone would like to use it to do this work.

2 Likes

You mean, make a new protocol meant to only serve documents, and then wait 30 years until someone says to start a new revolution and reinvent the web? :D

As long as the “more is better” mindset exists, you can’t win this fight.

Google distributes builds across their entire data center. That’s why the Ninja build system collects all of the build scripts into one huge Ninja build file in what remains of Fuchsia. It builds across many cores on countless rack-mount servers with a seemingly unlimited memory pool.

The Haiku codebase seems pretty well organized to me and it’s easy to build smaller parts (just run jam with a specific target name).

No matter what you try, it is a big project and integration testing will always need the whole thing. Having various setups for smaller scale testing would be nice (unit tests, and maybe simply small test apps that exercise one specific part of the code manually).
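As an illustration of the “small test apps” idea, here is a minimal sketch of what such a manual test might look like on Haiku, exercising nothing but app_server window creation; the file name, MIME signature, and window title are made-up examples, not anything that exists in the tree.

```cpp
// window_test.cpp - throwaway manual test: can app_server create, show
// and close a single window?  Built with something like:
//   g++ window_test.cpp -o window_test -lbe
#include <Application.h>
#include <Rect.h>
#include <Window.h>

class TestWindow : public BWindow {
public:
	TestWindow()
		:
		BWindow(BRect(100, 100, 400, 300), "window_test",
			B_TITLED_WINDOW, B_QUIT_ON_WINDOW_CLOSE)
	{
	}
};

int
main()
{
	// Any valid application signature will do for a throwaway test.
	BApplication app("application/x-vnd.test-window");
	(new TestWindow())->Show();
	app.Run();
	return 0;
}
```

Something this small builds in seconds and pokes at exactly one part of the system, which is hard to get from a full image build.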

But splitting the project more strictly into smaller modules makes it harder to refactor things or, more generally, to do work that spans multiple modules. So it goes a bit against the idea that Haiku works because everything is deeply integrated: if you need to make a change that involves code all the way from the kernel to app_server (going through apps, libbe, libroot, etc.), you can do that in a single, easy-to-review commit rather than coordinating between a dozen Git repositories.

This is not a fixed truth; for example, NetSurf’s organization shows how it’s possible to run a project in a very modular way, and, as was mentioned recently, this is extremely helpful when you happen to need just one of the modules. For example, I could reuse their CSS parser and wire it easily into Renga, which doesn’t need any other part of HTML.
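To make the “reuse just one module” point concrete, here is a rough sketch of feeding a stylesheet to NetSurf’s libcss on its own, loosely modelled on the example program bundled with libcss; the exact parameter fields and signatures vary between libcss versions, so treat this as an assumption rather than a reference.

```cpp
// Sketch: parsing a stylesheet with NetSurf's libcss, with no browser around.
// Loosely follows libcss's bundled example; fields may differ per version.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <libcss/libcss.h>

// libcss requires a URL resolver callback; for a standalone test, just hand
// the relative URL back unchanged.
static css_error
resolve_url(void* pw, const char* base, lwc_string* rel, lwc_string** abs)
{
	(void)pw; (void)base;
	*abs = lwc_string_ref(rel);
	return CSS_OK;
}

int
main()
{
	css_stylesheet_params params = {};
	params.params_version = CSS_STYLESHEET_PARAMS_VERSION_1;
	params.level = CSS_LEVEL_DEFAULT;
	params.charset = "UTF-8";
	params.url = "";                // base URL of the sheet (unused here)
	params.resolve = resolve_url;

	css_stylesheet* sheet = nullptr;
	if (css_stylesheet_create(&params, &sheet) != CSS_OK)
		return 1;

	const char css[] = "p { color: red; }";
	css_error err = css_stylesheet_append_data(sheet,
		reinterpret_cast<const uint8_t*>(css), strlen(css));
	if (err != CSS_OK && err != CSS_NEEDDATA)   // NEEDDATA means "send more"
		return 1;
	if (css_stylesheet_data_done(sheet) != CSS_OK)
		return 1;

	printf("stylesheet parsed\n");
	css_stylesheet_destroy(sheet);
	return 0;
}
```

The point is that the parser drags in none of the rest of NetSurf, which is what makes this kind of reuse cheap.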

Both ways are useful, but switching from one to the other has big implications for the team organization and what’s easy or complicated to achieve.

In Haiku we definitely need at least more documentation on the available ways to test smaller parts of the system at a time, especially because some of these changed a lot with package management and not everyone knows all the new ways. But maybe not everyone knows the old ways that still work, either.

2 Likes

I don’t think modules would make things harder if done properly, and the build system would need to do a lot less work. We are lucky to have SSDs and enough memory to cope, but the… patience required by each jam build is probably avoidable. Might be worth a thought at least?

That being said, I am not arguing we should do it, but I do think it could be an improvement and can be done without the mentioned downsides. Monorepos have proven that part quite well already.
If I ever do a bigger project, that is how I would structure it.

2 Likes

I have an NVMe disk that’s very fast, so for me the “patience” part is mostly gone. But I agree it was a little annoying on my previous machines (fortunately, none with spinning disks for a long while now).

Anyway, this would be fixed by switching to a better-designed two-stage build system (for example CMake + Ninja) without needing to split the project further. I would prefer that to a switch to modules that would make the full build more complex. It’s nice that building the full OS is quite simple, too.

I guess we’ll see how much Ham can solve without changing anything at all about the build system’s design, just its implementation. And maybe build from there in future GSoC projects?

5 Likes
  • Updated to FFmpeg 4.2.7.

Seems good, as I’ve tested old BeOS videos and YouTube videos, using MediaPlayer and Web+ on Haiku hrev56340 x64.

4 Likes

Bumping the thread: @3dEyes is currently working on adapting the qtwebengine patches for qtwebengine 6.4.2. There is another hint proposed by @Shlyupa from the Telegram channel - use the -g1 option to generate less debug info in the final binary, which should reduce its size. Another good clue is to build qtwebengine using the clang version from @X512.

P.S. @Munchausen, do you have access to the 128 GB server at the moment?

2 Likes

Yeah, it’s a workstation I have at home. Happy to set it up and provide remote access.

Can you install Haiku on it?

Yeah, sure. I haven’t tried it bare metal on there for many years, but I think it will probably work now, and if not I could run a VM and assign it a lot of RAM.

As an option, you could install Haiku on a separate SSD or NVMe drive.

If someone wants to use the machine for this task I will happily set it up and leave it on, but it uses over 100 W at idle and will take a couple of hours to set up and configure, so I will not make the effort unless someone is really interested. PM me if you’d like and I can get it up and running.

4 Likes

The earlier crash issues are resolved using the updated qtwebengine-5.15.2-2 on Haiku R1B4 x86_64.

1 Like

" Caution: Payment Handler API is only supported in Chrome as of January 2023. However, since Chromium based browsers already have the implementation, some of them may expose the API in the future." - Web-based payment apps overview | Articles | web.dev

1 Like