- Concept of root packages. The root package list contains the packages the user explicitly requested. Dependencies of root packages are installed automatically, and when a dependency is no longer needed by any root package, it is automatically uninstalled. Automatic uninstall can be avoided by adding a package to the root package list. The root packages mechanism keeps the system clean by automatically uninstalling packages that are no longer needed. It also makes it easier to see which packages are currently installed, because dependencies can be hidden when listing installed packages.
- Ability to install any non-native package without special builds of secondary-architecture packages. This would be implemented by remapping executables and libraries to $(getarch) directories when the system and package architectures do not match. For example, a package library would appear at the path lib/x86/libbe.so when mapped by PackageFS.
- PackageFS contents depending on the package of the currently executing process. This allows installing any potentially conflicting package versions, and each package will see potentially different dependency package contents. It is a bit similar to the Nix package manager.
- Non-archive packages. This allows any directory, anywhere, to behave as a package by putting a .PackageInfo file into it and adding some special FS attributes. A non-archive package would be mapped by PackageFS like a regular HPKG package. This simplifies application development a lot: an application can be installed into the system directly from its sources, without building an HPKG or copying anything. It also makes it easy to install dependencies for compiling and running.
- Repository references. Automatic deletion of a repository link when the last package from that repository is uninstalled. Similar to the root packages mechanism.
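The root-packages idea in the first point can be sketched as a simple reachability pass over the dependency graph: anything not reachable from a root is a candidate for automatic uninstall. This is an illustrative Python sketch with invented names, not Haiku's actual package kit code:

```python
# Hypothetical sketch: garbage-collecting packages that are no longer
# reachable from the explicitly requested "root" set.

def live_packages(roots, deps):
    """Return every package reachable from a root via dependency edges."""
    live, stack = set(), list(roots)
    while stack:
        pkg = stack.pop()
        if pkg in live:
            continue
        live.add(pkg)
        stack.extend(deps.get(pkg, ()))
    return live

# Invented example data: "orphan" was once needed but no root requires it now.
deps = {"app": ["libfoo"], "libfoo": ["libc"], "orphan": ["libc"]}
roots = {"app"}
installed = {"app", "libfoo", "libc", "orphan"}

auto_removable = installed - live_packages(roots, deps)
print(sorted(auto_removable))  # ['orphan']
```

Note that libc stays installed even though "orphan" also depends on it, because it is still reachable from the root "app"; adding "orphan" to the root set would protect it from removal, as the proposal describes.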
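The $(getarch) remapping from the second point could look roughly like the sketch below. The function and path layout are purely illustrative assumptions, not the proposed implementation:

```python
# Illustrative sketch (not Haiku code): remapping a library path into a
# per-architecture subdirectory when the package architecture does not
# match the host, similar to the proposed $(getarch) mapping.

def remap(path, host_arch, pkg_arch):
    if pkg_arch == host_arch:
        return path
    parts = path.split("/")
    # e.g. lib/libbe.so -> lib/x86/libbe.so for an x86 package on x86_64
    return "/".join([parts[0], pkg_arch] + parts[1:])

print(remap("lib/libbe.so", "x86_64", "x86"))     # lib/x86/libbe.so
print(remap("lib/libbe.so", "x86_64", "x86_64"))  # lib/libbe.so
```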
- Is this for running 32-bit applications on a 64-bit host?
Yes, but other useful scenarios are possible:
- BeOS-compatible gcc2 packages. Special secondary-architecture packages would no longer be needed.
- Installing devel packages of another architecture for cross-compiling. Package executables do not need to actually be executed in this case.
This sounds very, very useful.
5.: I think there are some scenarios where you would want a repository activated even if you aren’t using any software from it, just to have packages from it searchable from pkgman and instantly installable.
But if you’re talking about subrepositories, that makes sense.
It’s a great idea!
It seems to me that in our benchmarking the package system runs a bit slower than the newest Linux (5.10). At first I thought it was the fault of the file system, but after reading this topic I understand why. The solutions that @X512 recommends are good; my favorite is making packages for different architectures. I greet you warmly
I guess that for point 3 either symlinks or something in the filesystem layer can ensure that multiple copies of each expanded package are not required (one for each dependent package that is running)?
Is there a suggestion to transition to using a package specific filesystem? As far as I recall packages at present are just compressed file archives that are extracted, rather than compressed filesystem images that are mounted.
Agree with @Parnikkapore that point 5 would not always be desirable.
Point 2 would be interesting too if we had qemu user mode.
.hpkg is mounted, not extracted.
(3) means that the visible PackageFS files depend on the package of the running executable that calls the file system API. Consider packages A, B, C and D: A requires C, B requires D, and packages C and D provide different versions of libtest.so. When an executable from package A is running, it will see /boot/system/lib/libtest.so from package C, and when an executable from package B is running, it will see /boot/system/lib/libtest.so (the same path) from package D. If some package E exists that does not require C or D, it will see no such file at all.
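The A/B/C/D example above can be modeled as a lookup keyed by the calling package's dependency set. This is a toy model with invented data structures, just to make the per-process view concrete:

```python
# Toy model of the proposed per-process PackageFS view: which package a
# path resolves to depends on the dependency set of the package the
# calling executable came from (names and structure are illustrative).

provides = {
    "C": {"/boot/system/lib/libtest.so": "libtest 1.0 (from C)"},
    "D": {"/boot/system/lib/libtest.so": "libtest 2.0 (from D)"},
}
requires = {"A": ["C"], "B": ["D"], "E": []}

def resolve(calling_pkg, path):
    """Resolve a path as seen by an executable from calling_pkg."""
    for dep in requires[calling_pkg]:
        if path in provides.get(dep, {}):
            return provides[dep][path]
    return None  # a package that requires neither C nor D sees no file

print(resolve("A", "/boot/system/lib/libtest.so"))  # libtest 1.0 (from C)
print(resolve("B", "/boot/system/lib/libtest.so"))  # libtest 2.0 (from D)
print(resolve("E", "/boot/system/lib/libtest.so"))  # None
```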
AFAIK Haiku uses libsolv for package dependency resolution, so these ideas are already available, even if not exposed through an API.
2 and 4 should already be possible.
1 Packages installed only to satisfy a dependency are not automatically removed when the package that requested them is uninstalled, but a separate autoremove action exists for such use cases. The reason it is not applied automatically is that it can lead to problems. Consider a manually installed package A, which depends on package D. On installation of A, D is installed as well. Now consider a package B that also depends on D. When installing B, D is not reinstalled; it just remains available due to A. Now if A is uninstalled together with D, package B gets broken (the system knows D was installed for A, not for B). Of course, this could be made more automatic, but then the whole dependency hierarchy of all installed packages would have to be rebuilt on every package installation, which tends to be a very complex (and in most cases unnecessary) task.
3 I don’t think I really understand this point. There can be much trickier scenarios where the system has to choose which version of conflicting files / libraries / packages to use, and this can lead to hidden problems that are very difficult to debug.
In this scenario, isn’t putting the conflicting versions of /boot/system/lib/libtest.so directly inside packages A and B better than inside separate packages C and D?
(3) means that each package will see only the files of the packages it declared as required. A package may request a specific version of a dependency that is mutually exclusive with other installed packages’ dependencies. This guarantees that any package can be installed without conflicts.
That would eventually lead to putting all dependency files into the package itself and abandoning the dependency system, which would increase package file sizes and maintenance cost (all libraries would need to be installed into each package).
At least in the original implementation this was not true. Packages were simply extracted; although they act as though mounted/unmounted, this is not the case. I don’t remember hearing that it changed. How many mounts are listed when you type mount in the terminal? If packages were implemented as filesystem images that are mounted, there should be one for each package.
I understand the idea, just not the desired implementation. It could be done with filesystem permissions, with different search paths and symlinks, or with a specific filesystem implementation, I guess?
This is indeed an interesting idea. I think in-depth research is necessary to see that:
- it can be done without unnecessarily heavy dependency tree walking,
- it will not lead to inconsistency of any kind.
Maybe you, @x512, already have a working draft implementation. In that case you have probably also seen the possible dangerous scenarios.
HPKG files were originally designed to be mounted by the PackageFS kernel driver without extraction. A single mount point covers all the packages placed in a “packages” directory. There are two such package mount points: one for system packages (/boot/system) and one for user packages (/boot/home/config).
I see that now, looking at it. But packages are not really themselves mounted in the normal sense; this is some overloading of the term, I think? In fact they are just extracted onto the relevant “packagefs directory”, right?
No. They are mounted on the fly. The mount command just does not necessarily know about it.
They are not extracted. Files are accessed directly from inside the package without extracting it. So, it is closer to a mount than an extraction.
You still need some things to be globally accessible, right? For example, if I use gcc (which is in the gcc package), I want it to access all the libraries I have installed, and not have to rebuild the gcc package every time I want to use a new library just so that it depends on it and can see the .h files.
The idea is possibly interesting but it may also become quite annoying if not done right.
Overall I think I prefer the initial design of the packagefs: don’t do the super complicated dependency management needed by crazy Linux apps. Haiku apps usually are simple and have few dependencies, and the Haiku libraries have reasonably stable ABIs, so you don’t need to keep older versions around. Personally I would rather spend my time on that (ensuring stable ABIs, providing a good set of things in the core OS so apps don’t need to pull in many libraries) than making the package system easier to use for ported Linux software with unstable ABIs that relies on the Linux distributions to recompile everything all the time. (It is just my personal opinion and what I want to spend my time on, we can have both the nice stable ABIs and small native apps AND the package manager supporting the Linux things).
Ah, that’s the answer I was looking for. So they are not really filesystem images and are not themselves mounted in the normal sense of the term or using normal filesystem mount/unmount methods.
It helps to know this in the context of this thread, because it puts into perspective how any of this can be achieved.
You can think of it as a dynamic unionfs where packages are added and removed to the union. In the packagefs code, the term used is “activation” (and “deactivation” for removing a package). We have used “mount” because it is close enough and people generally know how that works. It isn’t 100% accurate, but it’s more accurate than “installing” or “extracting”.
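The "dynamic unionfs" mental model can be sketched as layers that are added and removed from a shared namespace. This is a simplified illustration (real packagefs has its own activation logic and conflict resolution, none of which is shown here):

```python
# Simplified sketch of the "dynamic unionfs" analogy: activated packages
# contribute a layer of files to one namespace; deactivating a package
# just drops its layer, with nothing extracted or re-mounted.

class PackageUnion:
    def __init__(self):
        self.layers = {}                 # package name -> {path: content}

    def activate(self, name, files):
        self.layers[name] = files

    def deactivate(self, name):
        self.layers.pop(name, None)

    def lookup(self, path):
        # Later activations win; a real implementation resolves conflicts
        # by package precedence rules instead.
        for files in reversed(list(self.layers.values())):
            if path in files:
                return files[path]
        return None

fs = PackageUnion()
fs.activate("bash", {"bin/bash": "bash binary"})
print(fs.lookup("bin/bash"))   # bash binary
fs.deactivate("bash")
print(fs.lookup("bin/bash"))   # None
```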