Use cases and arguments for Haiku in 2021?

I use Haiku for 85% of my needs.

2 Likes

I use Haiku.

3 Likes

Deskbar preferences → show application expander + expand new applications

At least this one was easy to fix :wink: .

It only works in vertical mode, however (Deskbar on the left or right).

I think there is a huge potential market, especially when you add in the fact (very important for me) that Haiku isn’t trying to steal your data and spy on your every move.
George Orwell was right in his prediction that ultimately we would all be spied upon. It began slightly later than 1984, and he was wrong in thinking that it would be the state doing it. It’s actually Microsoft, Google, Amazon, and Apple, but although the enemy is different, it’s almost as bad.
And in China, the state IS using techniques developed by private companies to control their citizens. Western countries will follow suit if we are not vigilant.

What is the draw for WebAssembly? Most of the WebAssembly stuff I have seen can be done better as a native app. I don’t think the browser is the right place for apps (and never did, even at the height of Web 2.0 or the pre-2001 Web bubble). In most of the uses I have seen, it seems like a crutch to justify a Web-first approach. Am I missing a use case? I have to say, I don’t really think “cross platform” works. I have been a pro developer since '98 and no one has ever made a cross-platform framework that was everything to everyone. Most are a massive compromise. I use one and develop against it daily… it is very clever, but it is still not native.

I also don’t know why we care about Fuchsia. Haiku is never going to be a mainstream OS. It will always have a niche. Without a massive focus shift in popular culture, all we can achieve by constant trend following and “me too” is an also-ran OS that “should have been big”. Sounds a lot like BeOS. I think constantly chasing the white rabbit is going to mean the niche becomes too wide, and Haiku will become another kitchen-sink OS with no real purpose.

I think you are a bit pessimistic.
A lot of people would have liked, and would still like, to switch to a non-commercial OS, but for more than twenty years the only real working alternative to the Mac/Win duopoly has been Linux, and Linux demands a much steeper learning curve than most people are prepared to invest in.
A simple, fast, independent OS has HUGE potential in my view.
And yes, it’s got to be kitchen sink. It’s got to be able to do what most people need. There is nothing wrong with that. That’s its purpose.

Sure - but it needs to set clear goals. Adding new requirements before the base is ready will mean the base is never ready. So, yeah - do we care about Fuchsia? It is not a general-purpose OS yet. Do we need to implement every technology now? No. Get what we have right first, then start raiding the kitchen to pile up more dirty dishes. I don’t think many users really understand how hard it is to get a stable release out with what you have, let alone adding in a bunch of extra requirements that have crept into vogue recently. I remember when it was “if BeOS just had Java, it would be perfect”, and all the time that was spent trying to make that happen… despite the fact that by the time Java was being demoed by @BryanV, it was almost at the point of being out of vogue, with browsers using it less and less. Then it was Flash… The truth is, chasing rainbows will never get you the user story you want.

1 Like

A big +1 to this. One of the main reasons I use Haiku.
I think we should try to promote and advertise that aspect of the OS more.

First of all: I view browsers as needlessly complex. The abstraction layers used to render hypertext make “web apps” cumbersome to use. The fact that WebAssembly can be used outside the browser with technologies like Wasmer should allow browsers-as-applications to be eliminated. All it really needs is access to I/O infrastructure, including a native GUI, and suddenly applications are less bound by compatibility.
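
To make that concrete, here is a minimal sketch of loading and instantiating a module through the standard Wasm C API, which Wasmer ships as its embedding interface. The file name “module.wasm” is a placeholder and error handling is kept to a minimum:

```cpp
// Sketch: running a wasm module natively via the standard Wasm C API
// (implemented by Wasmer, among others). Not production code.
#include <wasm.h>
#include <cstdio>

int main()
{
    wasm_engine_t* engine = wasm_engine_new();
    wasm_store_t* store = wasm_store_new(engine);

    // Read the module bytes from disk ("module.wasm" is a placeholder).
    FILE* file = fopen("module.wasm", "rb");
    if (file == NULL)
        return 1;
    fseek(file, 0, SEEK_END);
    long size = ftell(file);
    fseek(file, 0, SEEK_SET);

    wasm_byte_vec_t binary;
    wasm_byte_vec_new_uninitialized(&binary, size);
    fread(binary.data, 1, size, file);
    fclose(file);

    // Compile the module from its binary form.
    wasm_module_t* module = wasm_module_new(store, &binary);
    wasm_byte_vec_delete(&binary);
    if (module == NULL) {
        fprintf(stderr, "failed to compile module\n");
        return 1;
    }

    // Instantiate with no imports. This empty import vector is exactly
    // where a host could hand the module native I/O or GUI entry points
    // instead of browser APIs.
    wasm_extern_vec_t imports;
    wasm_extern_vec_new_empty(&imports);
    wasm_instance_t* instance =
        wasm_instance_new(store, module, &imports, NULL);

    // ... look up exported functions and call them here ...

    wasm_instance_delete(instance);
    wasm_module_delete(module);
    wasm_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```

From there, exposing native-API-backed functions through the import vector is what would replace the browser’s role as the I/O layer.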

Secondly, my view of cross-platform architecture requires a cross-architecture bytecode as a stepping stone. The difference is that an OS can become independent of the underlying instruction set long before cross-platform becomes the least common denominator for all applications. Someday RISC-V could implement graphics instructions from Vulkan, and having a cross-architecture bytecode would smooth the transition to a driverless architecture for graphics. Maybe other technologies will emerge that break compatibility with native code, but having apps and libraries stored as bytecode would smooth that transition too.

Summing it up: WebAssembly by itself may eat the lunch of the rest of “the Web” by integrating OS functionality in the bytecode.

Sure, but isn’t WebAssembly just another take on the same problem that Java and .NET also solve? .NET compiles to CIL, which is bytecode. I guess I’m already where you want to be, and I’m telling you: it is not all that great. It is all things to nobody, really. I have code that runs on classic .NET (4.x), .NET Core, Mono, and .NET 5.x. Same code. Android, Linux (IA32, x64, ARM, ARM64, MIPS), iOS, Windows, macOS, PSP… many more platforms. Compiled once. I can take the same code and run it in an interpreter on a PSP. But here’s the thing: the performance is subpar compared to native. All the modern VMs are JIT’ing and AOT’ing the bytecode these days, because bytecode is slow. When you are JIT’ing and AOT’ing code, you know what that really says? You should have used a native compiler and cut out the middle stage. Native will always be necessary. Native is always a better target.

2 Likes

I see integration of low-level OS functionality into custom silicon as the source of performance. Using a bytecode to represent it is a means to that end.

This is all still unproven, and very much in the future. It also sounds a little like the song the Syllable OS guys were singing with REBOL. For me, this sounds like a research project, not something for the devs here to be doing any time soon. Maybe you could fork Haiku and prove me wrong? I guess that would be cool to see. For me, anything that does near-metal rendering of graphics or low-level drivers is not a use case for this. And if all you are doing is creating an IL that is transpiled to native code, well, it seems way too convoluted.

1 Like

It’s actually earlier. In an era when everything graphical was done by the CPU on a unified memory architecture, the Commodore Amiga had a graphics core that offloaded graphics onto a custom coprocessor called the Blitter, arguably the first GPU. When the Amiga operating system didn’t supply adequate support for its own GPU, programmers found they could get better performance by bypassing the OS and banging the hardware directly. Do you see where this is going?

Without a bytecode representation, Amiga software became incompatible when 3D acceleration was invented. Its native code wasn’t future-proof.

You say you’ve been a professional coder since 1998? I got my first 2-year degree in 1995. We’re about the same age. We just need to see things from each other’s perspective.

An approach like SPIR-V would be interesting.

This is a good reason why I don’t see much hope for wasm:

Details if you want to waste time -- off topic

Because the browser is such a powerful interface to the internet, browsers work badly: everyone wants to be the dominant one and impose a solution that is under their control. This is why you see features implemented in one browser but not in others. It is not that the big companies are incapable (of implementing, for example, other companies’ image formats, e.g. WebP); it is a fight for the internet. Google has always tried to establish itself at the “root”; this is the reason Chrome exists, because it sits at the “root” of something used by the masses, not because Google wanted to improve people’s user experience. There are many other places where user experience should be improved… like the support pages of Google services and their live chat, which is the worst I have ever seen.
Once at the root of a technology, the dominant player will not easily let it become less relevant. You can check how YouTube was sabotaging Firefox while officially Google was Firefox’s friend…
See how Apple is fighting against Vulkan, trying to make it as hard as possible for people to use Vulkan on Apple platforms…

Check the NGINX web server: open source, even BSD-licensed. So why was it bought for $670 million? Why not simply fork it, if it’s such great technology and it’s under a BSD license?

Dreaming about what would be possible (or good) is a waste of time.

Check how many millions of “web developers” are fighting with CSS to make DOM objects look the same across different browsers. This could have been done easily if all rendering were handled by one and the same library, based on WebGL for example.
[This is why I am considering skipping the DOM and creating my own rendering lib based on WebGL2, so that the rendering looks the same on all browsers.]
The W3C could have made some kind of “standard lib” based on WebGL that ships with all browsers.
That would mean browser developers had a lot less to do: just make sure WebGL is supported, without worrying so much about rendering the DOM elements.

WASM has potential, but I see the development of wasm being delayed by many, many years. It would have been easy to make wasm work independently, without needing to interact with JavaScript.

I see the world going in a more fragmented direction. Look at how all Apple devices will work together (check the latest macOS 12 demo video)… and Huawei’s devices will be similar. This is the direction that is visible.

It makes no sense to just dream about technology… if you are going to spend the time, at least try to develop something yourself rather than waiting for it to happen by itself (done by others).

That’s why I have done some stuff with wasm, but my enthusiasm for wasm is limited [especially since, in the browser, it doesn’t work with service workers… and there are no threads… and in general very limited functionality, which you have to fight to import via JS…].

At the moment, the technologies I find most promising are:
Vulkan/SPIR-V, RISC-V, eBPF, io_uring

I guess you remember the recent case when NVIDIA GPUs were restricted from running mining code (Bitcoin). I think enough companies see an advantage in being able to decide which use cases their hardware can be used for. They might want, for example, their hardware to work in just one ecosystem; if they have more products, imagine they have hardware that somebody else could use in a product that competes with one of their own ecosystems. The companies don’t want to sell you a good outright; they prefer to make it more like a rented good for a special purpose [similar to cloud services]. Even after the sale, they want control over how, where, and by whom it is used.
A simplified hypothetical example:
Apple isn’t interested in having their ultra-great keyboard work on Linux… they prefer that you are so amazed by their keyboard that you buy the whole ecosystem, even if you are not that happy with the other components, because you have no choice: their ultra-great keyboard doesn’t work on Linux.

In conclusion:
Don’t expect to see easy and flexible access to hardware anytime soon.
Open source software had a long fight… and open source hardware will most likely take a lot longer. Until then, I would not expect to see easy access to hardware.

The owners/distributors of browsers will not contribute to weakening the power/influence of browsers (Google was not paying billions to Mozilla without reason).

I know I said a lot of stuff… fuzzy and in bad English… but maybe it is of help to somebody.

My point is:
First you have to check the business point of view, and only then check whether it makes sense from a technical point of view. The business point of view is so often overlooked, because very few people are interested in the business aspect… yet it has the biggest influence on whether a technology is promoted or not. That’s why we should give more thought to the business part.

2 Likes

Sure, I am very well acquainted with the Amiga. I have an (apparently) non-functional A500+ in storage. The Amiga was a closed platform. Any cards that came later were created by third parties. The OCS and ECS were pretty compatible for a reason. I think AGA was also compatible. If Commodore and Amiga had continued, there would have been a transition. But all the 3D stuff came much later.

The Amiga suffered from a lack of memory protection and no real OS at boot time, so every game essentially implemented some of the OS features that were missing at boot. Early games sometimes ran from the AmigaOS Workbench, but most ran from custom code. That, and the limitation of 512 KiB of RAM on the original Amiga 500, meant that coders were trying to eke every bit of performance out of the hardware. The chips could run independently, but you used a known API to set that in motion, unless you bit-banged them. As soon as people started skirting the API, the game was over, and forward compatibility was never a long-term thing.
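
To illustrate what “banging the hardware” meant in practice, here is a purely illustrative snippet in the style of the era; COLOR00 at 0xDFF180 really is the OCS background colour register, but everything else is simplified:

```cpp
// Purely illustrative: poking the Amiga's custom chips directly, the way
// many games did instead of going through graphics.library.
#include <cstdint>

// 0xDFF180 is COLOR00, the OCS chipset's background colour register.
volatile uint16_t* const COLOR00 =
    reinterpret_cast<volatile uint16_t*>(0xDFF180);

void FlashBackground()
{
    // This write works on OCS/ECS/AGA only because Commodore kept the
    // register layout compatible; as soon as the graphics hardware
    // changed shape, code like this broke, which is exactly the
    // forward-compatibility problem described above.
    *COLOR00 = 0x0F00; // bright red
}
```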

For me, strong native APIs that don’t require third-party solutions - this is the cornerstone of the issue. Adding extra cruft on top just obscures the solution and makes future development harder, because instead of a clean native API, you end up with one or more hacked-in half-solutions.

The reason the BeBox should have been the everyman computer was that it took everything the Hobbit-based Be machines did, removed all the ridiculously overcomplicated DSPs, and used standard PC-style expansion cards. So, basically, it could have been like the Amiga, but they instead made it more like a regular computer. This is why you are here. If they had stuck with the crazy DSPs and an obscure processor architecture (assuming AT&T had not ceased production), you and I would probably never have seen BeOS.

And it was more than just that. The original pre-BFS file system was also completely unfit for purpose. It was a database. All the file system stalwarts you know didn’t exist. Directories and hierarchies were factored in over the top of the actual file system. This meant they had almost zero way to integrate other file systems into the OS, because the way the kernel interacted with the file system was so alien. The boot menu used to have an option to “rebuild” the file system database, because without it the machine could go so badly out of sync that it became unstable and unbootable. All you hear about BeOS and the “database-like file system” refers to that file system. BFS has attributes because the developers wanted to mimic some of the features of the old file system. What BFS does now is nowhere near as complex as what OFS used to do.

Edit: Sorry, I forgot the point - BeOS was great because it was simple, not because it was trying to solve complicated problems by making the lowest common denominator generic. You really could take the same source code and make it run on PowerPC and Intel. But that was because the API was well formed; it had nothing much to do with the low-level stuff. It was only non-trivial when people started trying to push gcc, because the Metrowerks compiler used on PowerPC was unable to compile some of the more complicated C++ that later versions of gcc could handle. Stock R5’s gcc and mwcc were close to being on a par.

Stuff changes. But honestly, the whole WebAssembly idea sounds like a stopgap to me. It doesn’t sound like a serious solution. But maybe you are trying to convince this PowerPC BeOS user that having multiple processors and compiling an exe for each is not optimal? It is optimal. It’s fine. But when you start talking without considering endianness and how low-level hardware access fits into the kernel, and tout “runs anywhere”, I heap a spoon of scepticism onto the pile of doubt I already had.

4 Likes

A complete WebAssembly runtime is probably not useful, but its IL code and its IL → native code compiler could be used for the user-mode virtual machine I suggested before. When a program creates an area with the executable flag set and fills it with IL code, the virtual machine creates a mirror area and statically compiles the IL code into native code there. Then it remaps addresses to the mirror area.
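
A rough sketch of that mirror-area idea, using Haiku’s real create_area() API but a hypothetical translate_il_to_native() helper standing in for the actual compiler:

```cpp
// Sketch only: allocate an executable mirror area for IL compilation.
// translate_il_to_native() is hypothetical, not a real Haiku API.
#include <OS.h>

void* CompileToMirrorArea(const void* ilCode, size_t ilSize)
{
    // Naive upper bound for the generated code, rounded up to a page
    // multiple as create_area() requires.
    size_t size = (ilSize * 4 + B_PAGE_SIZE - 1)
        & ~(size_t)(B_PAGE_SIZE - 1);

    void* nativeBase = NULL;
    // B_EXECUTE_AREA makes the mirror area executable; the original IL
    // area can stay non-executable, separating data from generated code.
    area_id mirror = create_area("il mirror", &nativeBase, B_ANY_ADDRESS,
        size, B_NO_LOCK, B_READ_AREA | B_WRITE_AREA | B_EXECUTE_AREA);
    if (mirror < 0)
        return NULL;

    // Hypothetical step: walk the IL, emit native instructions into the
    // mirror area, and record how IL addresses map to native ones so the
    // VM can remap jumps and calls afterwards.
    // translate_il_to_native(ilCode, ilSize, nativeBase);
    (void)ilCode;

    return nativeBase;
}
```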

Maybe it is also possible to statically convert all executable sections of an ELF file to native code, so that a native executable file is produced from the IL one.
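
The first step of that conversion, finding the executable sections, is straightforward; here is a minimal sketch using the standard elf.h definitions (64-bit ELF only, with most checks elided):

```cpp
// Sketch: walk an ELF file's section headers and report the executable
// sections a converter would translate from IL to native code.
#include <elf.h>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;
    FILE* file = fopen(argv[1], "rb");
    if (file == NULL)
        return 1;

    Elf64_Ehdr ehdr;
    fread(&ehdr, sizeof(ehdr), 1, file);

    for (int i = 0; i < ehdr.e_shnum; i++) {
        Elf64_Shdr shdr;
        fseek(file, (long)(ehdr.e_shoff + i * sizeof(shdr)), SEEK_SET);
        fread(&shdr, sizeof(shdr), 1, file);

        // SHF_EXECINSTR marks sections containing machine code; these
        // are the spans that would be replaced with generated native
        // code, while everything else is copied through untouched.
        if (shdr.sh_flags & SHF_EXECINSTR) {
            printf("section %d: %llu bytes of code at offset %llu\n", i,
                (unsigned long long)shdr.sh_size,
                (unsigned long long)shdr.sh_offset);
        }
    }
    fclose(file);
    return 0;
}
```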

2 Likes

I agree about the WASM runtimes. I mentioned Wasmer as an alternative runtime for reference as well. If we’re making it OS-specific, which I have no problem with, maybe even that might be overkill.

I’ve started another thread to discuss the merits of various bytecodes if anyone is interested.

It should be OS-specific anyway, because interaction with the Haiku API is required. IL Haiku executables could only be executed on Haiku; anywhere else, libroot.so/libbe.so “not found” errors would happen.

2 Likes

I think I may have misunderstood what you meant by “kitchen sink”. I took it to mean mundane and boring, whereas now I suspect you meant “having everything bar the kitchen sink”, meaning something overloaded with unnecessary bells and whistles.

But whatever you meant, I do believe there is a massive potential market for an OS that just does the job, quickly and without fuss, without needing users to waste time taking courses or wading through manuals.

1 Like

This recompiling is an awesome idea! But it will, of course, impact the startup time of applications.