Forum server infrastructure inadequate?

I've noticed the forums have been quite laggy lately, and the repositories have noticeable slowdowns at peak usage times. Has Haiku outgrown its servers? What do we need to improve? What are the current usage stats (CPU, RAM, bandwidth), and which limits are we hitting?


I noticed this too. Could we not use Disqus rather than rolling our own software?

We’re using Discourse. I don’t think this is so much a software issue as a hardware issue. Ever since the beta release, all Haiku servers have seemed noticeably slower, especially during peak hours. I have a strong feeling that our server hardware infrastructure needs a boost; we have likely outgrown our hardware.


It’s not related to the beta release. We moved our servers from a single dedicated system at Hetzner to a system at a new hoster that uses iSCSI storage (hard disks over the network, basically). We did this because it was annoying to monitor the hardware ourselves and ask Hetzner to swap out hard disks whenever they started showing SMART errors.

This means that, for everything stored on these disks, we are dependent on the network load and the other servers around us. Their maintenance operations, some other server, or maybe some of our own traffic is causing performance problems. We are trying to investigate this with the hoster, but as you know, our sysadmin team has very limited time. Help welcome!


Which is why I’m asking for the usage stats. How resource-intensive is the web presence (Discourse, Trac, Review, repos, etc.) in terms of RAM, CPU and bandwidth?

If additional resources are required, I have a co-located server not doing much. More than happy to donate a bhyve VM or something.

Can help with sysadmin stuff if needed, seeing as it’s been my job title for the past ten years. Don’t see that changing any time soon :\


This has nothing to do with Discourse or the load on the server itself; it’s a pretty beefy server and we rarely hit CPU/RAM limits.

The real problem is that our storage is indeed, as @pulkomandy said, iSCSI-backed. The plan we have is supposed to give us ~75MB/s, and since we use a 3GB iSCSI cache in RAM, that should be more than enough for our use case.

The problem is that the typical speed is 25MB/s, and from the middle of the night to the morning France time (3AM-8AM), which is late evening to night EST (9PM-2AM), the speeds are often radically slower, as slow as 3MB/s. So all I/O just backs up waiting to flush writes (Postgres, which backs the forums and Trac, is especially hard-hit by this).
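For anyone who wants to reproduce these measurements, something like this is what I'd suggest (the path is a placeholder; oflag=direct keeps the RAM cache from masking the backend speed):

```shell
# Sequential write test against the iSCSI-backed filesystem.
# oflag=direct bypasses the page cache so the number reflects the
# storage backend, not the 3GB RAM cache. Path is a placeholder.
dd if=/dev/zero of=/var/tmp/iscsi-test bs=1M count=512 oflag=direct status=progress
rm -f /var/tmp/iscsi-test

# While the forums feel slow, watch per-device utilization and latency
# (iostat ships in the sysstat package):
iostat -xm 5 3
```

Running the dd test during the 3AM-8AM window and again at a quiet time should show the gap pretty clearly.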

We are theoretically on a higher support tier (we are paying extra for it, even) and @kallisti5 has a long-running support thread there, but he hasn’t gotten past the first/second-level techs, who just tell us to try things on our end, or want us to reboot into maintenance mode so they can test for some indeterminate period of time, which is obviously unacceptable to us.

At this point we’ve practically given up hope of getting hold of a higher-level tech who could actually fix anything, and have started looking for another hosting provider … again. That would be our third move in a year. We’d rather not do that, but things aren’t looking good.


So our servers are basically running on a NAS/SAN type configuration? That doesn’t sound very efficient. How does the cost of this compare to dedicated hosting? Have you checked out ? I’ve been impressed with them so far. I’ve been thinking about supplying a mirror or two based on their services to help offload some usage from the main servers, possibly in a load balancing setup. This is why I’m interested in what the hardware and bandwidth stats currently are. It would help in sizing a solution.

No, it is a dedicated machine; the persistent disk is just on an iSCSI server.

Using VMs at Vultr or elsewhere would cost 3-4x as much for less computing power (and less storage.)

CPU/RAM are the least of your concerns when serving a mirror. Currently we have something like 200GB of packages, and 600-700GB of nightly builds on the main server. The CPU/RAM usage is mostly by Discourse, Postgres, etc. and we never max out. It’s all in the storage.
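If anyone wants to see the storage breakdown for themselves, a sketch (the /srv paths here are hypothetical placeholders; the real layout on the server differs):

```shell
# Summarize space used by the package and nightly-build trees.
# The /srv/... paths are hypothetical examples, not the real layout.
du -sh /srv/packages /srv/nightlies 2>/dev/null
df -h   # and how much headroom is left per filesystem
```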

Vultr does dedicated servers, so there is no need to have storage off your machine. They start at $120/mo with 2x 200GB SSDs. That would eliminate this bottleneck.

What is our bandwidth usage? What is the cost of the server? Knowing our bandwidth, cost and machine requirements would either help solve the problem or end the discussion.

We moved away from Hetzner because we don’t want to be managing a physical server. It was annoying to have to watch the disks for SMART errors and ask them to replace the disks now and then. This is one of the reasons we went with this solution with externalized storage, not managed by us: we can trust the hosting company to keep the storage working, which allows us to focus on more important things.

Also, as waddlesplash mentioned, the repos are large; 2x 200GB does not cut it, you would need a terabyte of storage to start with (plus backups). This is why external iSCSI storage makes sense: it makes it a lot easier to extend the storage space as our repos grow.


Maybe it’s just me, but for the last couple of days the default avatars (letters) for users who haven’t defined their own are no longer displayed.

That’s just their base entry level. But the management issues pointed out make sense and make it a moot point.

One option: 1TB disks for local storage/DB access, with a dedicated local disk for the OS.


But as pointed out, this creates a maintenance hassle of sorts.