I was thinking about multiuser use cases and this popped into my head.
*BeOS/Haiku was and is designed to use SMP; let's extend that philosophy and improve on it to make the Be engineers proud!
*What if the program being used runs too slowly even with SMP?!
My answer is that some applications could make use of processor resources available on other networked computers. The best way to implement that would be with a cluster kit, so all future Haiku applications can share the same interface.
The type of applications this would improve the most are those that operate on chunks of data for long periods (video encoding, raytracing, protein folding).
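As a rough sketch of what such a cluster kit's job interface might look like (the `ClusterJob` name, the chunk-splitting strategy, and the sum-reduce are all my assumptions, not an existing Haiku API; a real kit would dispatch chunks to remote nodes instead of calling a local function):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// One unit of work: a slice of the input data plus its position in the whole.
struct Chunk {
	std::size_t offset;
	std::vector<double> data;
};

class ClusterJob {
public:
	// A worker turns one chunk into a partial result. In a real cluster
	// kit this callback would be replaced by an asynchronous network call
	// to whichever node picks up the chunk.
	using Worker = std::function<double(const Chunk&)>;

	ClusterJob(const std::vector<double>& input, std::size_t chunkSize)
	{
		// Split the input into fixed-size chunks up front.
		for (std::size_t i = 0; i < input.size(); i += chunkSize) {
			std::size_t end = std::min(i + chunkSize, input.size());
			fChunks.push_back({i, std::vector<double>(
				input.begin() + i, input.begin() + end)});
		}
	}

	// Dispatch every chunk to a worker and reduce the partial results.
	double Run(const Worker& worker) const
	{
		double total = 0.0;
		for (const Chunk& chunk : fChunks)
			total += worker(chunk);
		return total;
	}

	std::size_t CountChunks() const { return fChunks.size(); }

private:
	std::vector<Chunk> fChunks;
};
```

The point of the chunk abstraction is that a video encoder, a raytracer, and a folding client could all hand the kit "a pile of independent work units plus a reduce step" and never care which machine ran which chunk.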
You may have noticed I just mentioned folding; here is how I see it working.
Say I have 3 Haiku computers on my network and I install Folding@home for Haiku. If it were implemented with a clustering kit, all my Haiku computers could be folding, but it would only have to be installed on one computer!
Imagine being at a tech show with 4 or 5 Haiku computers rendering a raytraced scene. A spectator volunteers their computer and boots up Haiku, it attaches to the network automatically, and that computer starts raytracing as well. Imagine the impact that would have in people's minds!
Some things that might be good to keep in mind:
*use LLVM to recompile code for cross-architecture clustering and architecture-specific optimizations; possibly also enable OpenCL once Gallium3D is available
*keep it simple: a packet-centric design, possibly with redundancy in calculations for error correction (folding)
*allow users to control the resources available to other users with a Deskbar applet or desktop replicant
*how to safely execute untrusted code?
*make it automatic with zeroconf? (pop up a prompt, possibly the first time someone else tries to access your computer's resources)
*compatibility with other APIs for easy porting (shouldn't be hard, since most of them do the same thing anyway)
*Myrinet: less network overhead; look into it, possibly massive performance enhancements. I don't know if it can operate simultaneously with TCP/IP
*make sure it doesn’t interfere with normal network operation
*possibly have a high-performance mode with a dedicated network
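The redundancy-for-error-correction bullet above could be as simple as sending the same work unit to several nodes and only accepting a result that a majority agree on. A minimal sketch (the `MajorityResult` name and the strict-majority quorum rule are my assumptions):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

// Given the replies from every node that computed the same work unit,
// return the value a strict majority agrees on, or nothing if there is
// no quorum (in which case the coordinator would re-dispatch the unit).
std::optional<std::int64_t>
MajorityResult(const std::vector<std::int64_t>& replies)
{
	std::map<std::int64_t, std::size_t> votes;
	for (std::int64_t reply : replies)
		++votes[reply];

	for (const auto& entry : votes) {
		if (entry.second * 2 > replies.size())
			return entry.first;
	}
	return std::nullopt;  // no quorum: result is untrusted
}
```

This also partly answers the untrusted-code bullet: a single malicious or flaky volunteer node can't poison a result it doesn't have a majority on, at the cost of computing each unit more than once.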
I would like to hear your points of view as well. Clearly I don't know everything about how this would work, especially zeroconf; perhaps this isn't a good use case for it? I would also like to mention the reason I want it to be automatic: I once attempted to run WRF-EMS, a weather simulation model, on a 40-CPU Pentium II cluster. After many days of fiddling, all we ever got it to run on was the master node, which actually ran the sim slower than real time. That just shouldn't happen, ever (and this was with about 3 *nix-knowledgeable guys on hand and a weather guy). Who knows, we might see Haiku on The Weather Channel some day :-). Note that I am not developing this; I have loads of homework to work on :-).