This is what containerisation should achieve, but look at many Dockerfiles and you'll see they are based on non-versioned tags, or run apt commands that update the machine during the build, which means the next person to run the build may get different package versions. To be reproducible you need to know the exact machine setup; without that you are also open to supply chain attacks.
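To make that concrete, here's a rough sketch of the difference I mean (the digest and version strings are placeholders, not real values):

```dockerfile
# Not reproducible: "latest" moves, and apt pulls whatever is current at build time.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl

# Reproducible(ish): pin the base image by digest and pin the package version.
# <digest> and <pinned-version> are placeholders.
FROM ubuntu:22.04@sha256:<digest>
RUN apt-get update && apt-get install -y curl=<pinned-version>
```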
Another issue is pinning built artifacts (e.g. containers, Node.js packages, and so on) against the exact commit they came from. Scan and compare npm packages, for example, and you'll find that half of the common packages built from their claimed commit SHA don't produce the same binary output.
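As a rough illustration of the kind of check I mean (package and repo names are placeholders, and a real comparison would also normalise tarball metadata like timestamps):

```typescript
// Sketch: rebuild a package from its claimed commit and compare the result
// to the published npm tarball. All names and SHAs below are placeholders.
import { execSync } from "node:child_process";
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// 1. Fetch the published tarball from the registry as-is.
execSync("npm pack some-package@1.2.3");

// 2. Check out the commit the package claims to be built from and rebuild it.
execSync("git clone https://github.com/example/some-package.git rebuild");
execSync("git checkout <claimed-sha> && npm ci && npm pack", { cwd: "rebuild" });

// 3. If the hashes differ, the published artifact is not reproducible
//    from the claimed commit.
const published = sha256("some-package-1.2.3.tgz");
const rebuilt = sha256("rebuild/some-package-1.2.3.tgz");
console.log(published === rebuilt ? "match" : "MISMATCH");
```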
Then you have licensing issues. Alpine Linux images are great: they are small and they have relatively few Snyk scan findings, but they are built on BusyBox, whose GPLv2 holders are notoriously sue-happy. So corporates have to ensure they don't use Alpine as the base for anything they produce, or anywhere in the open-source upstream of their dependencies, and I end up with base-image bloat. On Node this is even hairier: how do I know that all 2089 packages (not a joke) yarn just installed have a licence compatible with my simple REST API service?
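For illustration, here's a minimal sketch of the sort of audit I'd want, assuming a standard node_modules layout; the allow-list is just an example, not legal advice:

```typescript
// Walk every installed package and collect its declared SPDX license,
// so anything outside the allow-list stands out.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const ALLOWED = new Set(["MIT", "ISC", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"]);

function scan(dir: string, report: Map<string, string>) {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const pkgDir = join(dir, entry.name);
    const manifest = join(pkgDir, "package.json");
    if (existsSync(manifest)) {
      const pkg = JSON.parse(readFileSync(manifest, "utf8"));
      if (pkg.name) report.set(pkg.name, pkg.license ?? "UNKNOWN");
    }
    // Recurse into scoped packages (@scope/name) and nested node_modules.
    if (entry.name.startsWith("@")) scan(pkgDir, report);
    const nested = join(pkgDir, "node_modules");
    if (existsSync(nested)) scan(nested, report);
  }
}

const licenses = new Map<string, string>();
scan("node_modules", licenses);
for (const [name, license] of licenses) {
  if (!ALLOWED.has(license)) console.log(`${name}: ${license}`);
}
```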
I guess my question is: if you were to provide the same experience (if not the exact same approach/config files) of isolating and packaging container images on Haiku, would the design be different, and what benefits or problems would such a subsystem have on Haiku? I.e. is there a 'better way' to do this? Is it easier on a single, unified ecosystem like Haiku than on the sprawl of Linux?
I'm more interested in the theory of whether there is a better way than in saying we should build it right now; the thought exercise might help shape the general approach. Of course, if we find there is a great benefit to a container-style system on Haiku, then great, I'll totally get involved. For now, though, knowing what primitives are already available would be useful.