I do not profess to be either a hardware or a software expert. I know Haiku is a threaded OS, and I know that when I run Haiku on my single-core Atom there are two virtual cores. When AMD released its “cheap” 6-core and then, more recently, its “cheap” 8-core, there was talk that most software couldn’t make use of all those extra cores. But am I right in thinking that Haiku would just soak up (as needed) and exploit all that extra grunt because of its architecture? Not sure what you would do with all that extra power.
I think Haiku provides all the low-level means for apps to scale (application threads, scheduled independently on different cores) that e.g. Linux, Windows and Mac OS X do. (I’m not sure if we have a core-pinning facility yet, so as to avoid core hopping and its cache-related performance loss. But that’s more of an optimization.)
On most platforms and in most programming languages you need to actively design applications to be multithreaded (or to use whatever other concept of concurrency is provided) for them to be able to run concurrently on multicore processors. Haiku is no exception. I don’t know of any silver bullet for concurrency. (Though some say that functional programming languages lend themselves better to it.) Many languages support concurrency and provide APIs for it, but using them is usually opt-in.
The Haiku API forces a dedicated thread per window shown, accompanied by a drawing thread in the app_server, but that falls short of any real scalability. Generally speaking you want calculation-intense parts of your application to be carried out by worker threads, and you want these to be mostly independent from other threads like the main app thread and any user-facing window threads, so that windows don’t visibly hang or become unresponsive. For any non-trivial application you want to use worker threads, and it’s up to you as the developer to design the app so it decides at runtime how many threads suit the available number of cores. Additionally, the algorithm you want the threads to carry out may only divide cleanly across an odd or an even number of cores, or a power-of-2 core count. Some multithreading strategies may be less than ideal on e.g. a tricore or a hexacore, if the algorithm has to ignore the last odd core, or the cores above the last power of 2 (4, in the hexacore case).
From a hardware point of view, not all architectures and cores are created equal. A core with Hyper-Threading is not equal to 2 independent cores of the same kind, since the two hyperthreads compete for the same shared core; they’re not truly concurrent. Some non-x86 CPU architectures have hardware threads that perform better, or support a larger number of hardware threads per core.
A lot of the press about minuscule performance gains from multicore has been related to games. It’s a matter of code bases and their age: a codebase written recently is likely to have been designed with scalability in mind, while older code needs to be rewritten to take advantage of multicore.
Another way of answering that question is that Haiku, as currently compiled, will scale to 16 processors. To really use all those processors concurrently, an application needs to be written to divide up the work. I rewrote the standard Mandelbrot demo to work with 4 cores:
I wrote another program that creates multiple instances of itself across multiple cores:
I personally think that the Haiku API lends itself very well to using multiple cores. There is a lot of promise here, but ultimately we need people to write new applications for Haiku that use multiple cores. We also probably need to make some updates to the kernel to optimize it for multiple CPUs. The current scheduler does not offer a way to pin a process or thread to a particular core, also known as processor affinity. When a thread bounces between cores, its context has to be saved and restored each time and its caches go cold, which is inefficient.
The future for Haiku is bright for multiple cores, but we need more programmers coding for it.