Writing about threads


So again, as I mentioned in the attribute topic, one of the things I’m doing is writing an article on the advantages of BeOS/Haiku to accompany my Beta review. This article will cover a bit of the feel of BeOS, hey/alert examples, packaging and states, iconography (with an emphasis on HVIF, although Be’s isometric design was unique in its own right), search and attributes (hence my last topic)… and, last but not least, threaded applications.

I had noticed the use of threads throughout the Haiku source, and have seen their presence in ProcessController in Haiku (and before then, in top on BeOS), but what is the distinct advantage of this design in Haiku? Forgive me for not getting this already after everything I have read about the advantages of the BeOS design, but I want to be absolutely sure that I get this right, so as not to misconstrue the topic. Thanks, everyone, as always!


Mainly prioritizing interactivity; it’s just the way to do things on BeOS/Haiku… it may actually slow Haiku down on faster CPUs due to the overhead of IPC, in cases where a single thread could have finished the work already if the amount of work was small enough. It’s very hard to benchmark that on the existing code base, though.

To avoid the overhead you just have to be careful not to abuse threading… it’s a good thing when done correctly.

Suffice it to say, it’s a complex topic.


One main difference is that on other systems, drawing is done with a single thread, whereas in Haiku each window is a thread. That’s one of the ingredients of Haiku’s responsiveness.


Isn’t that mostly not the case on modern OSes and frameworks though?


Yes, mostly not the case. I’ve had occasion to describe this system to people who don’t know about Haiku, and I get the impression that we have the only platform that does it this way. So there’s an occasional collision with programming language/library models that take the perspective that threads are a heavyweight, danger-zone feature, and where the concurrency model (if any) is user-level threading.


I think it might be unproductive to try to evaluate strategies on the level of “threads are good/bad”. There are good ways to make an application responsive at the user level without using threads, and there are lots of things that can go wrong with threads, but the real value is going to come out in the context of the whole system: e.g., messages, locks, how the app kit uses them, etc.


I remember something in the BeOS Bible, but I think this article is more modernized:

You may want to blend the info from those sources to bring everything up to date, with a dab of pthread variants in the wild and thread implementations on other like-minded OSes.


Off-topic here, but BeOS used bitmap icons; HVIF is a Haiku invention.


And on-topic stuff:

As usual, there is no magic involved in what we do. The APIs on other OSes allow doing the same things. The difference is that we try to make things work this way by default. BeOS and Haiku are among the few OSes where it is easier to write multi-threaded code than single-threaded code.

Let’s see with a few practical examples:

  • Thread priorities: with the pthread API, you create a thread (pthread_create) and then you optionally set a priority. The priority is just a numeric value, and applications are left on their own to figure out something appropriate. With the BeAPI, you create a thread and are forced to give it a priority. A few priority constants are provided, so application developers can see where their chosen priority values stand relative to threads created by other applications.

  • The “one thread per window” thing: it is possible with most graphics APIs to do this, of course. However, it requires writing code to spawn a new thread/event loop, and then attach the window to it. With the BeAPI, there is no way to create a window without a separate thread. This forces developers to think about threads even in their simplest applications, and they immediately get used to deciding what goes in each thread. (Note: here I’m referring to the threads on the application side. There is also one thread per window on the app_server side, but that’s a different matter.)


My apologies for not clarifying the format switch in my original post; I wrote things out too fast without thinking. For anyone new to Haiku, BeOS had a traditional icon editor… reminiscent of ResEdit, the resource editor I used to play with on the Macintosh. Switching over from the old icons of R5 was one of the cool parts of Haiku (Haiku Vector Icon Format).


Thanks for the reply, but I think my main question is… do threads handle everything? I understand the ‘thread per window’ advantage, but is each operation handled by a server? To explain my query: if an app requires network access, does net_server handle it, media_server media, etc., all as different threads? If so, this would be a huge advantage over other operating systems. Otherwise, I’ve misunderstood the way Be and Haiku work (which is why I humbly ask about it here).


The net_server is a different process; it handles setting up connections (DHCP, picking a wireless network, etc.) and collecting stats (number of bytes exchanged, etc.). It is not involved in network traffic, which each app can handle the usual way (nothing fancy here).

The media server handles the mixer node and the audio and video output nodes. In the media kit, each media node must belong to an application. Each application will run its own nodes, but the “system” nodes had to be hosted somewhere. This is what the media server does, mainly (this is incomplete; other devs with more knowledge of the media parts can complete it).

Threads are something internal to each application, allowing independent pieces of code to execute concurrently (using multiple CPU cores if available). A large number of threads helps because when one needs to wait on something (disk I/O, acquiring a critical-section resource, etc.), there will always be something else to run. The downside is that we waste time switching from one thread to another (a relatively costly operation). For example, modern video games will typically run one thread per CPU and try to make sure they are never blocked. They then share their workload across the threads without any context switch.


It might be worth mentioning that things like I/O, for example a network request that goes out as an Ethernet packet, also bring in the kernel for some of the work, which would be another process/thread. Like any OS.


No, the way these work is that the write call on the socket will not “invoke” another kernel thread; the calling thread itself “becomes” a kernel thread for the duration of the call. That’s what a syscall is.


So I/O is always fully synchronous? Can’t return from the syscall until all the packets are on the wire?


Nice to read. Having been coding with threads (pthreads and Win32 threads), I see how this goes, but I wonder if some of the good stuff could be put into an event model (as in the ev and uv libraries) to avoid thread creation and prevent some of the context switching.

You should still get responsiveness on the GUI side with that model. After all, a window, unless it requires a repaint, won’t need enough work to justify a whole thread of its own (again, I understand why the threads are there, just my 2 cents).


Not sure I understand, are you thinking that one thread per window is ordinarily a performance burden on the system?

There sure have been those who were critical of this system, but I thought the main points were that 1) real threads make applications fragile, because programmers fail to do the locking right, etc., and 2) the message apparatus that we use in part to address #1 is a performance burden and/or unreliable in situations where there’s a massive event load, perhaps from the UI.

I bet there is a lot of room for development of interesting techniques to work with multiple threads in a pervasively multithreaded application context like this, to help address #1.


I just meant that one thread per window may be too much if the window doesn’t have much work to do. The old-timers have more knowledge, so they may well have their reasons for doing it that way.

I forgot about the messaging part and how that may influence things, by the way; I’d better check the code to get better knowledge before talking more.


No, once you get to the TCP layer, there is usually some buffering, and then more buffering below that in the actual network driver. But if those buffers fill up, it’s possible the syscall would block until there is some space in them.


But whether it blocks or not, after it returns, some other concurrent process moves the data from the buffer onto the wire, right?