We still don't quite agree on how this thing should run.
I like the simple approach of the current buildmaster design.
The current system is designed as a script that you just run. This is easy. We can automate it with a cron job, or run it as a git hook every time someone pushes to the release branch. No need for this REST stuff.
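As a rough sketch of the git-hook idea: a post-receive hook gets "oldrev newrev refname" lines on stdin, so it only needs to check the ref and fire off the buildmaster script. The branch name and buildmaster path below are placeholders, not the actual setup.

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook: kick off the buildmaster when the
# release branch is pushed. Branch name and script path are assumptions.
import subprocess
import sys

RELEASE_BRANCH = "refs/heads/release"


def should_build(refname):
    """Only pushes to the release branch trigger a build."""
    return refname == RELEASE_BRANCH


def main():
    # git feeds one "<oldrev> <newrev> <refname>" line per updated ref
    for line in sys.stdin:
        oldrev, newrev, refname = line.split()
        if should_build(refname):
            # Start the build in the background so the push returns quickly.
            # "/srv/buildmaster/build.sh" is a placeholder path.
            subprocess.Popen(["/srv/buildmaster/build.sh", newrev])


if __name__ == "__main__":
    main()
```

A cron job would be even simpler: just call the buildmaster script on a schedule and let it exit early when there is nothing new to build.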
The status is generated as a JSON file while the build is running, and it is very easy to render a webpage from it (that work is done already). Of course the buildmaster has no idea about the slaves' status when it is not running - it does not even keep a connection to them. This is a build system, not a hardware monitoring solution. It only connects to the machines when it needs them, which even makes it possible to use machines that are not always online. I have run builds using my development machines, which works because the system is designed not to be invasive (for example, it does not require you to install any packages on the host Haiku - it manages the packages it needs by itself).
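To illustrate how little machinery the JSON-status approach needs: something like the snippet below turns the status file into an HTML table that any static web server can serve. The field names ("builds", "name", "state") are made up for the example - the real schema may differ.

```python
import json
from html import escape


def render_status(status_json):
    """Render a minimal HTML table from the buildmaster's JSON status.

    The schema used here ("builds" list with "name"/"state" keys) is an
    assumption for illustration, not the actual file format.
    """
    status = json.loads(status_json)
    rows = "".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(
            escape(build["name"]), escape(build["state"])
        )
        for build in status.get("builds", [])
    )
    return (
        "<table><tr><th>Build</th><th>State</th></tr>" + rows + "</table>"
    )


# Example status file contents (invented for this sketch):
example = '{"builds": [{"name": "haiku_loader", "state": "done"}]}'
html = render_status(example)
```

A cron job could regenerate the page after each run; no long-running server process is needed.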
The build run directories are working just fine with the shell scripts.
So, I think the system is ready to go live. We just set up git hooks to run it and that's basically it. Each time someone makes a commit, a few minutes later the packages are built and added to the repo.
Why layer so much REST and Docker on top of this? It looks like needless complexity to me.
Yes, there are some oddities with the buildslave status, but it looks like they could easily be fixed without rewriting everything (and even without a lot of Python knowledge).