Sandbox Security / Multi-user Idea

Here’s a basic idea I had about how to implement both multi-user support and security in one go.

First off, the simple way of implementing multi-user:

  • In the directory /Users/ there are mountable filesystem images. Each is encrypted and has metadata for a password hint, contact information, whether or not the user is an administrator, an encrypted version of the main crypto key for the image, and an icon to represent the user.
  • Early in the boot process, an application presents a list of the files in /Users/; the user picks one and types a password (which is used to decrypt the main crypto key). The system can continue booting whatever it can in the background to save time.
  • This file is then mounted as /home, or at least there is an attempt to do so. If the password is incorrect then the main crypto key will be wrong and the image can't be mounted.
  • Administrators are prompted in Vista UAC style if they attempt to change files outside of their home directory. Less privileged users are informed that they can't do that.
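The login flow in the bullets above hinges on one property: a wrong password yields a wrong master key, so the mount simply fails. Here is a minimal sketch of that key-unwrapping step; the function names and the XOR wrapping are illustrative assumptions, not Haiku APIs, and a real implementation would use authenticated encryption (e.g. AES-GCM) rather than XOR plus a hash.

```python
# Hypothetical sketch of the login-time key unwrap; not Haiku code.
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch the user's password into a key-encryption key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def wrap_master_key(master_key: bytes, password: str, salt: bytes) -> bytes:
    """Store the image's master key XOR-ed with the derived key.
    (Illustrative only; real systems use authenticated encryption.)"""
    kek = derive_key(password, salt)
    return bytes(a ^ b for a, b in zip(master_key, kek))

def unwrap_master_key(wrapped: bytes, password: str, salt: bytes,
                      checksum: bytes):
    """Recover the master key; return None if the password was wrong."""
    kek = derive_key(password, salt)
    candidate = bytes(a ^ b for a, b in zip(wrapped, kek))
    if hashlib.sha256(candidate).digest() != checksum:
        return None  # wrong password -> wrong key -> mount is refused
    return candidate
```

With a scheme like this, the boot-time login application never needs to store the password itself: an incorrect password just produces garbage, and the mount attempt fails exactly as described in the bullet above.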

Next is how to ensure applications behave:
Applications can be run in two ways. The first is the traditional way, with the full privileges of the user (Tracker would probably be run like this). The second is using “application virtualization”.

  • This involves the application residing in a file system image, and all of its disk reads and writes are redirected to that same image.
  • Upon user login each application's image is mounted on top of the root filesystem.
  • There would also be a shared documents folder where any application can write to the root filesystem, but it can only read/overwrite files that it created (indicated by metadata).
  • When a program attempts to do something for the first time, the user is prompted about whether to allow it to do so or not. Examples include accessing the internet, reading from the root filesystem, accessing other partitions (which aren't handled by the image redirection mechanism), or sending BMessages to other applications.
  • Rules can be more advanced than all or none. Internet access could be restricted much like a software firewall can do. BMessages could be restricted to certain types (Copy/Paste) or to certain applications. Root filesystem access could be limited to a directory or two.
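The first-use prompting described in the bullets above amounts to a cached decision table: prompt once per (application, resource) pair, then reuse the answer on every later attempt. A toy sketch, where `ask_user` and the resource strings are hypothetical stand-ins for the real GUI prompt:

```python
# Illustrative sketch of first-use permission prompting; not a Haiku API.

rules = {}  # (app, resource) -> allowed? (the cached user decisions)

def ask_user(app: str, resource: str) -> bool:
    """Stand-in for the GUI dialog; a real system would block on user input."""
    print(f"Allow {app} to access {resource}? [Allow/Deny]")
    return True  # pretend the user clicked "Allow"

def check_access(app: str, resource: str) -> bool:
    key = (app, resource)
    if key not in rules:               # first attempt: prompt the user once
        rules[key] = ask_user(app, resource)
    return rules[key]                  # later attempts reuse the decision
```

The more advanced rules mentioned above (port ranges, specific BMessage types, individual directories) would just make the resource keys finer-grained than the simple strings shown here.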

So, for an internet browser you’d want to allow it access to the internet (perhaps add a custom rule to only allow outbound ports 80 & 443 and block access to pornographic hostnames), but you would not allow it read access to the root filesystem (breach of privacy). Furthermore, if there were an exploitable security hole, all an attacker could do would be to mess up the insecure application (and access the internet, in this case).

In my opinion, this is a very, very bad idea. Here I explain why.

In short: it is not uncommon for an application to need full privileges, including read/write access to system folders, unrestricted network connections, and the ability to monitor other mounted drives. There are plenty of examples of such applications, antivirus software being just one of them.

Sorry to say it, but the kind of system meddling described in your other post is the exact kind of thing this system would be designed to stop. The described program would work absolutely fine, without prompting the user for anything except net access, yet a user could uninstall the shareware application without leaving a trace on the system. Besides, specific rules could be created rather than just allowing full access. Minimal privileges, after all.

Antivirus applications tend to be rather poor at playing nicely with other applications. A full disk scan should be possible even with sandboxing (allow read-only or read-write access to the system partition), and the sandboxing should keep any other programs from disabling the anti-virus. That eliminates the need to go to such extremes to protect itself, as the standard user-centric model of security necessitates.

IMHO, it’s extremely uncommon for an application to require unrestricted access to everything. I know that’s the case on Windows (where Software Virtualization Solution + System Safety Monitor + a software firewall can roughly emulate this system). The very concept seems insecure (modifying system and data files based on info fetched from the internet and from other running applications). This security scheme gives users the final say in what applications can do, rather than the programmer. That seems to me to be the very nature of malware defense, and it should encourage more secure and polite programming.

Izomiac, we are on the exact same page!

I was thinking about a way to implement security in Haiku (application security more than multi-user support) and I decided to do a search on “Haiku application virtualization” before I started yelling out “Hey I have a great idea!”

If anybody wants to start developing this after R1 then I am very happy to help.

I think it’s a very bad idea to start introducing multiuser into Haiku. I’d leave multiuser to systems like Linux and BSD.

Please, keep Haiku nifty and simple. Don’t convert it into the Linux I’m forced to use right now… All due respect.

EDIT:

I see this more and more - people want package managers, multiuser, etc. on every OS out there. I for one don’t like the way Linux is heading. Is every OS out there going to be like Linux in the future? I should probably write something long and deep instead of just spouting this out. But I think many people will agree with me (and many won’t) that simplicity was one of the things that made BeOS a superior OS to use. Add complexity and you lose that.

Linux doesn’t force anything; it just doesn’t offer good utilities to configure how you want your multiuser setup to work.

If I have noticed anything about Haiku, it’s that it has good config utilities. Also, multiuser is important from a security standpoint, and for many people it’s a usability feature.

BTW, Haiku seems to have the beginnings of a package manager, but it works more like .msi on Windows as far as I can tell (from the user’s perspective).

If you just kept ditching features for the sake of simplicity, you wouldn’t have an OS at all… so instead you have to design the features with a simple-to-use design.

Also, regarding the OP’s comment on blocking pornographic links in the browser, DansGuardian might be of use there: http://dansguardian.org/

I have to say I like the idea of ditching Multi-User as we don’t need another Unix.

As far as application security goes - application virtualization all the way! For those unfamiliar with application virtualization: do not think of this as emulation (like QEMU or VMware) but more of an abstraction layer which controls access to system resources, similar to the way that a firewall controls access to a network. This approach to security should also be married to some utility which lets a user make a decision about which system resources they want a program to have access to (like a firewall), and a warning system like MS’s UAC.

Maybe applications could be viewed as users of a system? The application (as a user) should only get access to resources it is authorized to use. Any comments?
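One way to picture "applications as users" is to give each application its own principal with an explicit set of authorized resources, checked on every access. A hypothetical sketch (the class and resource names are illustrative, not anything from Haiku):

```python
# Sketch of treating each application as a security principal.
from dataclasses import dataclass, field

@dataclass
class AppPrincipal:
    """An application acting as a 'user' with its own resource grants."""
    name: str
    authorized: set = field(default_factory=set)

    def may_access(self, resource: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return resource in self.authorized

browser = AppPrincipal("browser", {"internet", "/boot/home/Downloads"})
```

Under this model a browser could reach the internet and its download folder, while any attempt to touch, say, /boot/system would fail the `may_access` check by default.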

If the “package manager” is more like a library of installed packages and a way of doing software installation/uninstallation, then that’s fine. But IMO, having central repositories is asking for trouble - just like any central bureaucracy. You end up with assumptions in user libraries and a system that will be messed up if you happen to install software from outside the repository.

Software installation is still, after all these years, one of the biggest problem spots on Linux - even though many Linux developers (and some casual users) praise it. This is because it works like magic if you are fine with whatever is presented to you from above, but when you try to install a foreign binary into this “closed ecosystem”, things tend to break down.

As for cataloging software that has been installed with an installer-type of interface, that is fine - and sweet.

When it comes to multiuser, I am just saying: it might mess with the file system structure, and it might mess with the complexity. It doesn’t have to, as long as the implementation is clean.

As for making a sandbox environment for software, please don’t do it if it impedes performance.

Performance-wise, application virtualization is less than a 1% performance hit. An encrypted home directory is similar. Firewall/sandbox-type software also has an unnoticeable hit (if done correctly). This is based on current implementations on other OSes. Basically, hard disk activity is already bottlenecked by hardware, so the extra activity on the software side is insignificant.

What I don’t get is how it is planned to prevent apps from deleting any other app… you can’t do that without true multiuser/*nix privileges anyway. As far as I understand, even Windows does it this way (though they don’t do it well, IMO).

So the whole thing is kinda pointless, since you will still need to have true multiuser at the FS level.

You could make your home-dir image idea work, but it would be more like an extension to regular multiuser.

Multiuser lets the application do anything the user could… including add or remove applications. If you run as a non-privileged user then a malicious application can still toast your home directory (i.e. everything that can’t be reinstalled) while you can’t manage hardware or run certain applications. Privileged users and applications basically have no security restrictions. Making a unique user for every application with minimal permissions is hard to maintain and complicated even in the best of scenarios. Multiuser is good for systems that have multiple users, or for servers where data is less important than keeping the OS and services running smoothly.

As for my suggestion, an application could not touch another application. For one, it couldn’t even detect the other application if both were virtualized. Example: Program A requests a directory listing of /boot/apps. The OS first gets the real directory listing, then adds in whatever Program A has stored under /boot/apps in its own image. The combined list (taking into account files marked as “deleted” from the image) is sent to Program A. Program A doesn’t get to see files that only exist in Program G’s image.

As for a virtualized application deleting a non-virtualized application, this change would be written to the virtualized application’s image, but not to the root filesystem. So the virtualized application thinks it succeeded, whereas all other applications don’t see the change. Or perhaps just give a security error to the application attempting to delete the other application. Either way, there’s enough isolation that no damage is done.
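The two paragraphs above describe what amounts to an overlay/copy-on-write scheme: each app's image holds its own additions plus "deleted" markers, and the app's view of a directory is computed by merging its image with the real filesystem. A toy sketch (purely illustrative, not Haiku code):

```python
# Sketch of per-application overlay views of the filesystem.

def merged_listing(real: set, added: set, deleted: set) -> set:
    """What one virtualized app sees: the real listing, minus entries
    it has 'deleted', plus entries that exist only in its own image."""
    return (real - deleted) | added

class AppImage:
    """One application's private filesystem image."""
    def __init__(self):
        self.added = set()
        self.deleted = set()

    def delete(self, name: str):
        # The delete is recorded in this image only; the root FS is untouched.
        self.added.discard(name)
        self.deleted.add(name)

    def view(self, real: set) -> set:
        return merged_listing(real, self.added, self.deleted)

root = {"Tracker", "WebPositive"}          # the real /boot/apps
prog_a, prog_g = AppImage(), AppImage()
prog_a.added.add("NotesA")                 # exists only in A's image
prog_g.added.add("SecretG")                # exists only in G's image
prog_g.delete("Tracker")                   # G "deletes" a real app
```

Program A sees Tracker, WebPositive, and its own NotesA, but never SecretG; Program G believes Tracker is gone, while the root filesystem and every other application's view are unchanged.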

Izomiac, you seem to have the idea :)

Here is a link to an animation (.gif) which shows a way of application virtualization.
Link to Sandboxie Animation

I currently do not have the skills to be able to write application virtualization in Haiku, but time is on our side because it will probably be a while until R2. This is a project that would be VERY interesting, and as soon as Haiku has adequate security then it really will be a real OS in the modern sense.

There are a few different ways this could be implemented, but I don’t think it has to be done totally from scratch: inspiration could at least be drawn from chroot, which is used in Linux (and other Unixes), or from BSD jails. Obviously our implementation would be fairly different, due to not being multi-user (I think this is something Haiku should stick to), and it would hook into some type of framework which would allow the user to interact with a nice, simple GUI.

As a follow-up from the week before, I think the best solution is to either port Systrace (no, not Strace. Systrace.) or basically clone Systrace.

Systrace can effectively straitjacket a program/process. A control-panel-type application can easily be made for Systrace which would basically compile Systrace rules, and Systrace already has the ability to create a pop-up window when action is needed.
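A Systrace-style policy boils down to matching a syscall plus its argument against an ordered rule list, and falling back to a user prompt when nothing matches. A simplified sketch - the rule tuples below are illustrative and are not actual Systrace syntax:

```python
# Toy model of Systrace-style syscall policy evaluation.
import fnmatch

def evaluate(policy, syscall: str, arg: str) -> str:
    """Return 'permit', 'deny', or 'ask' for a syscall and its argument."""
    for rule_call, pattern, action in policy:
        if rule_call == syscall and fnmatch.fnmatch(arg, pattern):
            return action
    return "ask"  # unmatched calls would trigger the pop-up window

# Example policy for a hypothetical sandboxed app:
policy = [
    ("fsread",  "/boot/home/*",   "permit"),  # may read its home dir
    ("fswrite", "/boot/system/*", "deny"),    # may never write system files
]
```

The control-panel application mentioned above would essentially be a GUI that builds and edits this rule list, with the "ask" fallback feeding the pop-up window.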

This sounds like a much better approach than sandbox security à la Sandboxie, Google Native Client, or seccomp in Linux.