Closed hardware monoculture, computer architecture and reputation

As a continuation of a tangent on the Fuchsia roll-out thread:

SBCs (single-board computers) are self-contained, SoC-based computers. They use one chip for graphics, sound, and general data processing in the form of CPU cores, which makes them easy to support. Performance is often compromised by the unified memory architecture that comes with them, and they use vector units to speed up what would otherwise be poor-performing GPUs. The upside is that the drivers are more easily reverse-engineered to run on alternative operating systems like Haiku. They are often cheap ARM and AArch64 boards, with RISC-V now entering the arena.

In the Fuchsia thread, I mentioned how Commodore rolled over and died in 1994 because the C64 didn’t sell as well once the 16+ bit CPU embargo on Russia and Eastern Europe was dropped. I also stated that monocultures are temporary. Tying in what Fuchsia represents to the industry: it appears that Fuchsia is going to try to target everything that Android presently runs on, as well as ChromeOS. Since Android is primarily ARM- and AArch64-based and ChromeOS is x64-based, it appears to me that they are going to try to use WebAssembly to bridge the compatibility gaps between them.

One position I have advocated in the past was using WebAssembly with the package manager/app store to make packages cross-architecture, so that ARM and AArch64 could potentially run the same software as x86/x64. It certainly wouldn’t hurt RISC-V either, because SoCs can map their custom extensions directly into the instruction set, shifting the burden of compatibility from the runtime drivers to install-time bytecode translation.

Another position I have advocated was making the OS as small and fast as possible without compromising design considerations regarding reliability. The smaller the system overhead is, the better the caches work. The smaller and more reusable the code is, the less code has to be swapped out to disk. Memory hierarchy optimization is the key to overcoming many performance obstacles.

One last position I would like to advocate is that we don’t entertain commercialism as a means of exploitation. Crony capitalism isn’t as good as the original American Dream was initially designed to be. Sometimes businessmen and businesswomen need to cut deals to get things done but never forget that it is the customer and consumer of goods that we are serving, not the oligarchs of the tech sector and public sector. Overcentralization of power is all kinds of evil at once. When a few executives of a web platform can deplatform and help depose heads of state, a line has been crossed that we should never cross. We need to be as good as what Google started out to be with their “Don’t Be Evil” motto. Not as bad as they have ended up being by contributing to global imperialism and oligarchy.

I’ll step down from my soap box for a while now. I’m interested in hearing your responses.


I am confused about why you think Fuchsia would use WASM. It seems like a really bad choice, especially since Android already has ART and Flutter compiles directly to the native architecture. Why would they add this step of indirection?
Really, all Fuchsia needs to run Android stuff is to implement ART, and that’s it.

It also seems a bad fit for Haiku. If we want to install ports on another architecture, we could just use QEMU for that. Slowing down execution of native software so we can run it on potentially different hardware seems like a poor choice, especially since debugging would be much harder in this setup.

When running natively (not in the browser), WebAssembly isn’t any slower than C++ compiled with Clang. Why do you think it would be slower?

I don’t see a distinction between running in the browser and natively. I don’t have a WASM CPU, and I don’t know anyone who has one, so we need to translate it back to the native architecture, which means the compiler has even less info about the target. I don’t see any way this could be native speed, and I consider that quite important since Haiku makes some of my older hardware useful that Linux can’t effectively run on.

That’s the problem. Browsers are bloated monstrosities, and much of their infrastructure is still geared toward JavaScript JIT compilation. AOT compilation is totally different.

JavaScript JIT compilation also has little to do with WASM. WASM is the same inside and outside of browsers; that was kind of the point, too.

The similarities end there. WASM is the ONLY part that is the same inside and outside the browser. WebGL is an abstraction on top of OpenGL ES 2.0. HTML5 is built on top of a huge runtime library, as is CSS3. It’s not WASM that is slow; it’s the abstractions that run on top of it in the browser.

Edit:

Here’s an analogy: bytecode is like a bookmark. If you put a bookmark in the book and finish reading at another time, you still read the same words from the book. Abstraction layers in the browser are like adding addenda at the back of the book: they increase your reading time whether you bookmark it or read straight through.

The point I’m trying to make is that on RISC-V, the drivers for an SoC will be embedded in the instruction set of the CPU. There will be NO NEED for external drivers if the added instructions are incorporated into the bytecode compiler. That will eliminate the need for addenda and abstractions, making the computers faster rather than slower.

Here I agree completely. My frustration got so far that I started thinking about not using the DOM/CSS and instead writing my own GUI elements directly on top of WebGL2 while also using WASM. But I think WASM is not yet mature enough (threads especially are missing for me), and what is inconvenient is that you cannot store WASM modules with a service worker, meaning slow/delayed starts.


That’s why I think Haiku needs Wasmer.io in its repertoire. Wasmer is based on Rust runtimes rather than browser abstractions. The fact that Wasmer is being billed as WebAssembly for the server is almost irrelevant: it can be built on top of an OS directly and make the browser unnecessary in the long run.