How to kill the Haiku project

The idea proposed in the “Let’s become root! [ was: Re: Berlios? ]” thread on the development mailing list is very dangerous.

It would place all of Haiku’s online presence on one server. That is the only way I can see the project actually dying. Imagine the source code repository, the bug tracking system, the mailing lists and the Haiku website all going down at once due to a catastrophe at the hosting company, losing all the data, including the 100 GB backup.

This could kill the project. We must ensure that no single server or service holds all of our data and backup data.

There’s no easy way to count how many people use Git regularly, but in the past some Haiku developers have said they use Git rather than working on Haiku’s Subversion repository directly. Git users have a copy of the entire repository history, because that’s how Git works, so it’s hard to lose any source code if your developers use Git.
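As a quick illustration of why that matters, here is a minimal sketch (Python driving the git command line) of taking a full-history mirror of a repository and counting the commits it contains. The URL is just a placeholder, not an official Haiku mirror.

```python
#!/usr/bin/env python3
"""Minimal sketch: mirror-clone a repository and confirm it carries full history.

The URL below is a placeholder, not an official Haiku mirror.
Assumes git is installed locally.
"""
import subprocess

REPO_URL = "git://example.org/haiku.git"  # placeholder URL (assumption)
MIRROR_DIR = "haiku-mirror.git"

# A --mirror clone copies every ref plus the complete commit history,
# so any such clone can stand in as an off-site backup of the source code.
subprocess.run(["git", "clone", "--mirror", REPO_URL, MIRROR_DIR], check=True)

# Count every commit reachable from any ref, just to show the history is all there.
count = subprocess.run(
    ["git", "-C", MIRROR_DIR, "rev-list", "--all", "--count"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(f"{MIRROR_DIR} contains {count} commits of history")
```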

Projects like Haiku don’t die sudden deaths like that anyway; they just slowly fade away when people lose interest. The idea that it might die despite continued interest from developers is just as silly as the idea that it must inevitably succeed if people stay the course.

The point is that this is a single point of failure. No venture, be it a business or a non-profit, can afford to build in a single point of failure. Too much is at stake.

For a company the big problem is essentially that your customers can’t rely on you for some days / weeks / months while you put things back together; that means you lose customers, and THAT means you go out of business. In most cases the IT failure is subsidiary to other problems (e.g. a warehouse burns down and the stock control systems are lost, but so is all the inventory; by the time new stock is available, customers have found an alternate supplier).

And so in practice small companies build in single points of failure all the time. It’s just an aspect of risk management: doing something which is likely to be fine but has a tiny risk of causing the company to fail is usually better than being paralysed by fear and doing nothing.

It’s much less serious for Haiku, which is merely a volunteer software project. If Haiku was rebuilding for a week or even a month after a disaster scenario, it wouldn’t be that big a setback in real terms. For those who are using Git it wouldn’t even mean a temporary halt to development.

Even if the project shifts to a dedicated server, I don’t think the repos at Berlios are going anywhere. If anything, the two repositories will be kept in sync.

At least that’s my understanding.

I think you have valid concerns, Jim, since I myself have dealt with such a catastrophic failure at a previous job. We had 2 out of 3 disks on a RAID 5 array die at the same time on our dedicated web server… very unlikely, but it happened (RAID 5 can only survive a single disk failure, so the array was gone). We were able to rebuild, and the company did not go out of business from it (though later it did for other reasons, maybe because I quit :wink:).

Anyhow, I also agree with what NoHaikuForMe says. Given the nature of our project, something like the web server going down isn’t going to kill it. The heart of the project is the code, and as NoHaikuForMe says, many of us are using Git (at least partially), so there are several full clones of the repo with all history. In fact Travis Geiselbrecht is maintaining a Git repo online at http://git.newos.org/?p=haiku.git;a=summary which seems to be kept totally up to date with our SVN repo at Berlios.
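For anyone curious how such a mirror stays in sync, a git-svn based setup would look roughly like the sketch below. This is only a guess at how that mirror might be run, not a description of Travis’s actual setup; the SVN URL is a placeholder rather than the real Berlios address.

```python
#!/usr/bin/env python3
"""Rough sketch of keeping a git-svn mirror in sync with an upstream SVN repository.

This is only a guess at how such a mirror might be run; the SVN URL is a
placeholder, not the real Berlios address, and git-svn is assumed to be installed.
"""
import os
import subprocess

SVN_URL = "svn://svn.example.org/haiku/trunk"  # placeholder URL (assumption)
MIRROR_DIR = "haiku-svn-mirror"

if not os.path.isdir(MIRROR_DIR):
    # Initial import: replays the full SVN history into a local Git repository.
    subprocess.run(["git", "svn", "clone", SVN_URL, MIRROR_DIR], check=True)
else:
    # Periodic update (e.g. run from cron): fetch any new SVN revisions into Git.
    subprocess.run(["git", "svn", "fetch"], cwd=MIRROR_DIR, check=True)
```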

Still, it would be a pain if we lost the web server, bug tracker data and whatever else might be on a new dedicated server. So I think, as part of our new plan for hosting, we will consider some redundant backups of the most important data. We also plan to use independent VMs for each of the services, which could be independently backed up and quickly restored in case of catastrophe; a rough sketch of what that might look like follows.
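As one possible shape for those redundant backups, here is a rough sketch of a nightly job that archives each service’s data directory and copies it to an off-site host. Every path, service name and destination host below is made up for illustration; none of it reflects the actual layout of the new server.

```python
#!/usr/bin/env python3
"""Rough sketch of a nightly off-site backup for the non-code services.

Every path, service name and host below is hypothetical; adjust to the real
layout of the VMs. Assumes tar and rsync are available and SSH keys are set up.
"""
import datetime
import subprocess

# Hypothetical per-VM data directories (website, Trac bug tracker, mailing lists).
SERVICES = {
    "website": "/srv/www",
    "trac": "/srv/trac",
    "lists": "/srv/mailman",
}
OFFSITE = "backup@offsite.example.org:/backups/haiku/"  # placeholder destination

stamp = datetime.date.today().isoformat()
for name, path in SERVICES.items():
    archive = f"/tmp/{name}-{stamp}.tar.gz"
    # Archive the service's data directory...
    subprocess.run(["tar", "-czf", archive, path], check=True)
    # ...and push it to an independent, off-site location.
    subprocess.run(["rsync", "-av", archive, OFFSITE], check=True)
```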

We will also make sure our service provider has a good record since this really is supposed to be their job, right?

Finally, if anyone in the Haiku community has extensive experience in these areas, any advice is appreciated.