I’m looking for any documentation related to async network programming under Haiku.
Unfortunately I can’t find any that explains how to organize a fully async, multi-threaded… say, client application.
For example, I want to develop a client application with one additional thread to receive and send (and connect/disconnect, of course) raw data through a TCP connection to a remote server.
Host protocol is not relevant.
Such an application should have the following properties:
Handle async connect and graceful disconnect.
The communication thread is not killed on app exit.
Data and control operations are sent between the app and the communication thread.
The thread waits for messages (from the socket or the app) in only one place, no matter whether the socket is connecting, receiving, sending, gracefully disconnecting, or doing anything else.
I’m aware of Haiku messaging, so the question here is about sockets (or whatever they’re called).
For example, in Windows there is a way for a thread to wait for multiple events, some of which can come from sockets and some from control messages sent by the host application.
The NetworkKit is very poorly documented (from what I’ve found), the Unix/BSD socket library is by default the ugliest thing in the world… and, more importantly, it’s a separate library from Haiku’s NetworkKit.
At best, one can mix POSIX threads with the socket library to accomplish this.
So I’d be pleased if someone has documents that explain in more detail how to use the NetworkKit (or something else, but please not the socket/pthread libraries) to accomplish this.
My recommendation is to use the same Berkeley socket interface you’d use on any UNIX-related platform, wherever possible. That’s as good as it gets on Haiku, plus it will be portable. It’s C code. Ugly is in your head, a mental problem for you to deal with.
To wait for multiple events, look at wait_for_objects() in kernel/OS.h. I’ve never used it, that I recall anyway. That won’t put all your event I/O “in one place”, if I understand what you mean - a normal application will dispatch message events to multiple threads - but as I understand it, you can use this function in a thread that manages the socket I/O while communicating with other threads via messages.
We all know that polling is the worst technique ever.
It wastes CPU time, and it is not responsive.
As for “wait in one place”, I mean waiting for messages from both Haiku and the sockets in one system (or other) function call, in a dedicated communication thread.
Otherwise I have to use polling, which I don’t want.
Or block on a socket read and close the socket from another thread when I need to stop that thread… or worse, kill it directly.
Neither variant is a graceful closure of the socket.
Moreover, I’ve read that on BeOS (I don’t know about Haiku) select() can only be used for read operations.
If wait_for_objects() works, this will be great.
However, I’ve also read that sockets are not file descriptors in Haiku (unlike Linux/Unix); I hope this function really works.
Am I the only one who understands the need for such a function ?
How do you organize graceful closure of sockets and reader threads?
This is a common problem, I think.
And the UNIX/Linux socket library does not provide a good solution for it either.
Of course, if you’re doing student work of the open->write->close kind, the socket library is great.
In Linux I’ve used signals to unblock the communication thread’s blocking ppoll() call, which was again pretty ugly, but it works.
select() had a tendency to lose signals between the select() call and re-enabling signals (I forget the function name); that’s why ppoll() handles enabling/disabling the signal mask atomically while entering and leaving the call. But this is not available here.
Well, another possible solution is to open two sockets: a control one and a real one.
I can’t imagine the overhead of this.
May I suggest you revise your knowledge of the POSIX socket API, because select() is not polling; it’s wait-for-fd-events-or-timeout. It does not waste CPU time, unless you consider that practically every Apache web server out there is wasting CPU time “polling” its Internet link.
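To illustrate the point: a thread inside select() sleeps in the kernel until the descriptor has an event or the timeout expires; no CPU cycles are burned in between. A minimal sketch (the helper name `wait_readable` is invented for illustration):

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait until fd is readable or timeout_ms elapses.
 * Returns 1 if readable, 0 on timeout, -1 on error.
 * The calling thread sleeps inside select(); it does not busy-poll. */
int wait_readable(int fd, int timeout_ms)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int n = select(fd + 1, &readfds, NULL, NULL, &tv);
    if (n < 0)
        return -1;
    return FD_ISSET(fd, &readfds) ? 1 : 0;
}
```

The same call shape works for write and error events by passing the other fd_set arguments.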
Under the non-BONE flavor of BeOS, select() did indeed work only with read events, and sockets were not file descriptors either.
But these shortcomings were lifted in Haiku long ago: we have full support for read, write and error events, and sockets are file descriptors, as expected.
Plus, select() is a public and very portable API, while wait_for_objects() is not really public and is Haiku-only.
I strongly suggest using the most portable, used and known API to implement your network protocol, which is the POSIX sockets API, period. Your network code will be platform independent, always a big plus.
And there are zillions of code samples using the socket API to take inspiration from, a tribute to its usefulness.
Last but not least, you’ve practically answered your own question:
in a dedicated thread, wait on socket events with select() or [p]poll(), which do not waste CPU time, contrary to what you think;
close the socket(s) to gracefully stop that networking thread: closing will wake up the select()/poll() call, as expected, letting your code tear down the thread.
To put it briefly: simply write synchronous (aka blocking) network code in a dedicated thread, and close the socket file descriptor(s) to control its lifecycle.
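One hedge worth adding: on some systems, close() on a descriptor that another thread is blocked on is not guaranteed to wake the blocked call, so shutdown() is often the safer wakeup mechanism; it makes the blocked recv() return 0 (EOF) and lets the thread exit on its own. A minimal sketch of that pattern (function names are made up for illustration):

```c
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* Networking thread: blocks in recv() until data arrives or the socket
 * is shut down, then exits cleanly instead of being killed. */
static void *net_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[128];
    ssize_t n;

    /* recv() returns 0 (EOF) once shutdown(fd, SHUT_RDWR) is called,
     * which ends the loop without killing the thread. */
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
        /* ... process n bytes ... */
    }
    return NULL;
}

/* Controller side: request a graceful stop, then join the thread. */
void stop_net_thread(int fd, pthread_t tid)
{
    shutdown(fd, SHUT_RDWR);   /* wakes the blocked recv() with EOF */
    pthread_join(tid, NULL);
    close(fd);                 /* safe now: the thread no longer uses fd */
}
```

Only after the join does the controller close() the descriptor, so no thread ever touches a stale fd.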
I think fil knows that select() doesn’t poll. The reservations about it are somewhat understandable if he’s been reading about BeOS, but those were the bad old days. Haiku’s implementation comes from FreeBSD if I remember right, and is part of the POSIX file descriptor system, so that all works like it’s supposed to.
When I read “Am I the only one who understands the need for such a function?”, I thought “what are you talking about, why do you suppose Haiku has such a function, if only you understand the need for it?” But he somehow managed to anticipate your reply, inasmuch as here you declare that you fail to see the need for it.
wait_for_objects() is the sort of function that by its very nature can be implemented for Haiku only. It’s a bridge between the various I/O event types that can occur on the platform, including platform-dependent ones. If you’re writing an application that has no Haiku API components, then you don’t need it. If your application’s socket thread only needs to hear from other threads under very simple and restricted circumstances, maybe you don’t need it either, since I suppose you could equally well have it open a second socket for internal communications. But this is apparently what the function is for, and why I believe the Microsoft API has a similar function. It’s far more convenient and general to allow the “socket” thread to receive messages on its message port like any other thread.
In my opinion, this general approach to thread interaction is much more robust. When you have to close the socket from another thread, you’re intervening asynchronously in the socket thread’s business. I/O in progress? who knows. When you can leave any “state” in a program to the exclusive domain of one thread, and let the threads interact via I/O, that allows all threads to conduct their business uninterrupted.
As for signals … I would not recommend signals, for any use whatever in situations like this. I understand that to be the opposite of robust.
Unfortunately, wait_for_objects() is an experimental Haiku feature, so there is currently no guarantee that this function and its associated structures will remain as they are in subsequent versions, or indeed from one day to the next.
That’s all right with me, but of course you should make up your own mind before using it. It has been there for years, so to the extent there’s any experiment in progress, it can’t have been going all that badly.
@don - this is what I meant. EXACTLY
You can’t mess with a socket from several threads… this can lead to loss of data, and it is bad design too.
Unfortunately, the socket library is more or less designed to work that way; that’s why I don’t like it much.
Also, if you are developing a server and you do not use “graceful shutdown” (a well-known socket technique) on all connections, then after shutting the server down you can’t restart it for some time, because the listen socket is placed in TIME_WAIT… etc. (I don’t know if this is true for Haiku.)
I’ll also be using Haiku GUI components, and I don’t really care about compatibility.
I care about good design and robustness only.
Thanks for the replies, it was a good discussion.
I hope one day Haiku integrates this “wait for multiple objects” function into its NetworkKit design.
But it also means that the networking thread must then expose its state-driven design somehow. Which usually leads to some finite-state-machine API, a thread-safe one, that can be called from the program’s other threads, ideally the controller one(s).
While Haiku’s wait_for_objects() could maybe (I didn’t check) wait on both socket(s) and messaging port(s), it won’t be portable and relies on an experimental API, not a public one, mixed right into the middle of otherwise highly portable POSIX socket networking code.
What could be done is to use a pipe fd (or a loopback socket, but that adds the network stack overhead) to send commands to the networking thread, using select()/poll() on both the socket(s) and this command pipe’s fd. Writing to this pipe should be made thread-safe, though, if multiple threads may send commands to the networking thread.
Historically, if the Windows SDK introduced WaitForMultipleObjects it was not initially because of multithreading issues but, au contraire, due to a single-threading issue, namely how to wake up the event loop on more than just window message events. And the Windows Sockets API was not POSIX-compliant then, IIRC.
But that single-threading issue is just what we’re talking about: the need to respond, in one thread, to events from a message port and also other I/O. If Windows sockets are POSIX compliant now, I can’t see how that has changed anything - POSIX certainly will not replace this functionality, as Windows message events are outside its domain. The UNIX world now has something similar via FreeBSD, kqueue(), but even if Haiku were to add (or already has?) a kqueue API, it would naturally be its own nonstandard variant, just like MacOS X kqueue has an option for dealing with Mach message ports that’s absent in the other BSDs.
Regarding the TIME_WAIT state… this should not normally be a problem, but be sure to use SO_REUSEADDR where appropriate.
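For example, a sketch of a listener that sets SO_REUSEADDR before bind(), so a restarted server can rebind even while old connections from the previous run sit in TIME_WAIT (the helper name is hypothetical, error handling kept minimal):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a loopback TCP listener with SO_REUSEADDR set before bind().
 * Returns the listening fd, or -1 on error. */
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The option must be set before bind(); setting it afterwards does not help a rebind.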
POSIX, and Unix at large, promotes the idea that every resource can be accessed as a file, via a file descriptor. You can perfectly well single-thread a loop that waits on sockets and/or commands, as long as they use file descriptor transport. The standard input, output and error streams come to mind, but you can create any other file descriptor to communicate with a device or, thanks to pipes, with other programs or with yourself.
I don’t see how it’s any less valid a way to handle such needs than any other, non-portable, API doing the same.
The issue at hand here is how much you want your code to be platform-specific. Or not.
PS: +1 for SO_REUSEADDR for a static socket address service that should be restartable ASAP…
This shouldn’t come as a surprise, I think, but just for laughs I tried to use select() on a message port, and of course it doesn’t work – not being a POSIX fd. Conversely, of course wait_for_objects() did work.
The problem I see with using pipes or sockets to control the communication thread is that they are just slower (compared with the Windows events technique), and you must add considerable code overhead.
If we use pipes, we need two pipes (they are unidirectional, right?) to implement synchronous operations such as “quit”.
If we use sockets… well, your communication message (say, just a 4-byte control command) needs to pass through the whole TCP (or UDP?) stack! That is unacceptable to me for internal application communication.
However, let’s say it is OK for a client application that uses one or several connections to different servers. But what if you are developing a server, which should be both responsive and able to handle many connections…
Another technique in Linux is the network super daemon, which accepts new connections and starts a process for each new connection.
Well, then comes the problem of synchronizing all these processes to do a common job.
As we know, communication between processes is slow, with a lot of system calls.
I’ve been writing client/server applications for a long time, and honestly I don’t like the Linux/Unix approach to async socket techniques at all.
How much slower are we talking about? Presumably you have done some benchmarking in this space. Haiku should probably fix whichever problems make its pipes and sockets so slow compared to other systems where they are often the fastest IPC mechanisms.
Within a process you can hand code faster lockless message passing (e.g. using atomics and a circular message buffer), but if you’re expecting to wake a sleeping thread then you’re already paying for a context switch and probably need to stop sweating the small stuff. You also seem reluctant to write extra code, so this option might rule itself out for that reason.
Traditional Unix systems provide a lighter weight socket family for this purpose, sometimes called the Unix domain socket, which does not incur any overhead from a TCP/IP implementation. Nevertheless in practice the difference is small, and I begin to wonder if you have measured it at all.
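A sketch of such a channel using socketpair() with the AF_UNIX family (the helper names are invented for illustration): a 4-byte command is copied kernel-side between the two descriptors and never touches the TCP/IP stack.

```c
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create the control channel: sv[0] for the controller,
 * sv[1] for the networking thread. Returns 0 on success. */
int make_control_channel(int sv[2])
{
    return socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
}

/* Send a fixed-size 4-byte command; no TCP/IP framing involved. */
int send_command(int fd, uint32_t cmd)
{
    return write(fd, &cmd, sizeof(cmd)) == sizeof(cmd) ? 0 : -1;
}

/* Receive the matching 4-byte command on the other end. */
int recv_command(int fd, uint32_t *cmd)
{
    return read(fd, cmd, sizeof(*cmd)) == sizeof(*cmd) ? 0 : -1;
}
```

Because both ends are ordinary file descriptors, the networking thread can include sv[1] in its select()/poll() set alongside the real network sockets.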
[quote]Another technique in Linux is the network super daemon, which accepts new connections and starts a process for each new connection.
Well, then comes the problem of synchronizing all these processes to do a common job.
As we know, communication between processes is slow, with a lot of system calls.[/quote]
I think you’ve confused two quite different designs in one sentence here. A super server, like the traditional inetd, listens on many ports and starts the appropriate daemon only when a client first connects to its port. This is a way to avoid wasting resources: if you don’t SSH into a particular machine very often, why always have an SSH server running? The super server design averts your earlier concern about re-using server ports, because the “listen socket” is owned by the super server, which doesn’t exit when the associated server shuts down. But you seem to have muddled this with a fork-first approach like that seen in (typical configurations of) Apache or OpenSSH, where every new connection is given its own process. OpenSSH does this for security reasons, while Apache does it for performance reasons. But in both cases they’re relying on the fact that relatively little writable information is shared between connections, and thus a whole process is affordable. For something like an IRC server you wouldn’t take this approach. Again, “slow” is relative here: how “slow” have you found it to be, and what’s an acceptable limit?
In Windows, a communication thread is usually blocked in one place, where you wait for multiple events.
An “event” is a mechanism (much like a mutex or a semaphore) that allows you to wake up threads with a single function call, without ANY additional serialization overhead (in user space AND kernel space) or anything else, while simultaneously waiting for socket events in the same function (ANY socket events).
As for how much faster it is, I haven’t explicitly tried to measure.
I’m just saying that it is much more elegant and fast, and requires much less code to write.
That makes it much more error-free.
About the second part, yes, I was talking about inetd.
And I was just trying to explain that this is not a good approach for server solutions either.
However, it has its advantages, as you noted. But it solves only a special case, when you don’t need to synchronize many separate processes.
I agree that the overhead of a socket for internal communication isn’t likely to be significant in the larger picture. I would be a little concerned though that we can’t really say for sure where a heavily used network service is going to run into significant resource problems, unless unbeknownst to me someone’s already using Haiku this way. If your unusual application exposes some weaknesses in Haiku, I’m sure the core developers will be all over it, but it could still be an awkward setback. Don’t mean to put you off, but it’s the kind of thing that’s happened on every platform, and no reason to imagine Haiku’s any different.
Well, you can skip the serialization step and use the suggested control file descriptor (a pipe, ideally) only as a way to wake up from the select()/poll() call.
Write your message to a Haiku port, then write a magic “you got a message!” command byte on the control pipe.
In your networking thread, on reading a valid pipe command, do a read_port() to retrieve the actual message and handle it within the same thread. That way, you can have up to 256 distinct kinds of wakeup events without caring about pipe write integrity in a multithreaded context…
This combines select()’s high POSIX portability with the platform’s best IPC at the same time.
Sure, it’s not as clean as a portable WaitForMultipleObjects(), but the fact is that there is no such portable API.
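Since Haiku’s write_port()/read_port() only exist on Haiku, here is the same deposit-then-wake pattern sketched portably, with a mutex-guarded one-slot mailbox standing in for the port (all names are illustrative; on Haiku the mailbox would be a real port):

```c
#include <pthread.h>
#include <unistd.h>

/* Portable stand-in for a Haiku port: a tiny mutex-guarded mailbox. */
struct mailbox {
    pthread_mutex_t lock;
    int has_msg;
    long code;        /* the actual message payload */
    int wake_fd;      /* write end of the control pipe */
};

/* Sender: deposit the message first, then write the one-byte wakeup.
 * Ordering matters: by the time the byte arrives, the message is there. */
int mailbox_send(struct mailbox *mb, long code)
{
    pthread_mutex_lock(&mb->lock);
    mb->code = code;
    mb->has_msg = 1;
    pthread_mutex_unlock(&mb->lock);
    return write(mb->wake_fd, "!", 1) == 1 ? 0 : -1;
}

/* Networking thread, after poll()/select() reported the pipe readable:
 * consume the wakeup byte, then fetch the actual message. */
int mailbox_receive(struct mailbox *mb, int pipe_rd, long *code)
{
    char byte;
    if (read(pipe_rd, &byte, 1) != 1)
        return -1;
    pthread_mutex_lock(&mb->lock);
    int ok = mb->has_msg;
    if (ok) {
        *code = mb->code;
        mb->has_msg = 0;
    }
    pthread_mutex_unlock(&mb->lock);
    return ok ? 0 : -1;
}
```

The wakeup byte only says “something is waiting”; the payload travels through the mailbox (or, on Haiku, the port), so the pipe itself never needs multi-byte write atomicity.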
It’s kind of an interesting approach, but it seems potentially fragile – are there circumstances where you’d get nothing on the message port, even though the peer thread managed to send its byte on the pipe? If you try to finesse that with a timeout on read_port, how long would you wait before you’re past the reasonable possible elapsed time between the two?
I don’t know if these are real problems, guess it depends on the application, but it just seems like a cumbersome lash-up, just to avoid a non-portable solution to a problem that’s inherently non-portable anyway. But it does look more or less workable.
write_port() being a synchronous call, if you write the “wake!” byte to the pipe after it, there is no way the networking thread could wake up but then fail (or block) on read_port().
What could be a problem, though, is if you write to that port faster than your networking thread can wake up, read and handle it. But then you have a bigger issue, as it is no longer an asynchronous networking controller situation but a networking producer/consumer parallelism design that you need.
It’s not only about avoiding non-portable code in the middle of otherwise portable POSIX code, but about avoiding reliance on an experimental, possibly unstable and most probably still-evolving API, besides being non-portable, at the core of your networking code.
If you’re targeting Haiku specifically, don’t care about portability (B_DONT_DO_THAT!) and can live with a possibly moving target, then just go for wait_for_objects() instead of select().
If not, well, you have alternatives relying on a far less clean API, no doubt, but highly portable and solid for much longer.