Haiku Security

I think that to have a secure system we need to continue with the Haiku philosophy: USER FIRST!

So installed applications must only write their config, logs and so on in their installation folder; if they MUST write to another folder, the system should prompt the user to enter the password… but I think it mustn't be as invasive as it was in Vista… the application should simply avoid writing to other folders!

Another security threat is when an application wants to send user information over the net… Haiku must ask the user for permission… the only legitimate "net" access is when an app must check for updates… but I think the updater should be system-wide, so the application doesn't have to go online for that either!
Obviously an e-mail application and a browser must go online whenever the user wants :-)))

I'm talking about this in the AppFile discussion… but I think application installation can be a security topic, too.

And to add another security level (if this is not sufficient), application sandboxing may be an option too!

The downside to not writing preferences to a single location in the system is that it makes it nearly impossible to quickly back up all your application settings/preferences. It also makes the system less useful for multiple users.

For example: If I have a few applications, say a mail reader, an internet browser, and an IRC client, and 3 different people use the machine, I would prefer if each of those applications stored their preferences in a user-specific location (e.g. ~/config/settings like Haiku currently does)… same with storage of data - I want my mail stored in ~/mail

This is actually more secure, since individual users cannot access each other's files/data. Imagine the browser stores its cache per user: then each user can only access their own browser cache.

I can then backup (or move) all the settings for a specific user, across all applications at once without having to hunt for them.

If each application is limited to the directory it is installed in, this becomes… painful.
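To illustrate the point: because every application writes its settings under one per-user tree, a backup is a single directory walk. Here is a small Python sketch (the paths model Haiku's ~/config/settings layout; the function name is mine, not an existing API):

```python
import os
import tarfile

def backup_user_settings(home, archive_path):
    """Archive every application's settings in one pass.

    This works only because all apps write under one per-user tree
    (~/config/settings on Haiku). If each app kept its preferences in
    its own install folder, you would have to hunt them down instead.
    """
    settings_dir = os.path.join(home, "config", "settings")
    with tarfile.open(archive_path, "w:gz") as tar:
        # One recursive add covers every application's preferences.
        tar.add(settings_dir, arcname="settings")
    return archive_path
```

Restoring (or moving a user to another machine) is the inverse: extract one archive, and all applications find their preferences again.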

Are we talking about security of the user, or security of the machine?

This is, in my opinion, a bad idea.

For example, I'm currently writing an application that adds a new filetype and indexes its attributes. While the files themselves may be stored in user folders, nothing requires the user to store them there; a user may store these files anywhere she wants, even in system folders. (You said it yourself - user first, so if a user wants to store these files in system folders, nothing should prevent him/her from doing it.) But this means that my application will constantly monitor the whole system. Sure, using queries and live queries, but it will be system-wide anyway. Besides, the application will have constant access to the index of all mounted BFS volumes. The cherry on top: my application adds an attribute to a randomly chosen system file (something like the kernel or libbe.so), to prevent illegal distribution of commercially sold copies and to distinguish between shareware and bought copies. Sure enough, it also accesses the Net, not only to check for updates but for other purposes (syncing with the Google servers, for example, and validating that the copy is legal for use).

OK, I'm exaggerating a bit about constant access to system folders, but you get the point.

Your suggestion would require the user to enter his/her password every time my application accesses the global index, system file(s) etc., or at least once - upon launch. As you may understand, this is, IMHO, unacceptable. No one will enter his password for every application that wants to perform a system-wide search, or connect to the Net, or access folders other than the application's own, even if the password is required only once per session. Reducing the need for entering passwords by creating tiers of applications is useless too, because malware may sneak into the wrong tier: either you check every application every time, or you leave the system insecure.

As a matter of fact, it may even be easy enough to create a sniffer that acquires a user's password in this (perfectly legal!) way and sends it, along with the user's IP address, to a hacker… thus making the system much less secure.

I hope I didn't misunderstand your suggestion :)

OK Urias, we can think of a list of trusted directories in which an application can write without the user's password:

~/config/myapp/
~/mail for a mail application

and so on… but, for example, I don't want an application - say, Bezilla - to write anything in ~/config/OpenOffice/… maybe it shouldn't be able to read that folder either… for Bezilla it shouldn't even exist!
A sort of sandbox.

It is obvious that in the case of OpenOffice, if the user wants to save a file, he can choose any folder… but it is the user who does this, not the application itself…

I think we must find the right balance between user security and machine security… at the same time we can't ask for the user's password too often, as Vista does… if we do that, the user just clicks OK without reading!

[quote]This is, in my opinion, a bad idea. […] I hope I didn't misunderstand your suggestion :)[/quote]

Ahem, no offence, but either your application is part of the system (and Haiku's own parts shouldn't have to ask for passwords) or it's the kind of thing I can't accept… I can understand that it must run queries to add new filetypes, but I don't like it touching libbe or other system files for its shareware checks… is that really necessary?
Can't it create a sort of hash file in the application directory, or add this attribute to itself?
The net is maybe a necessity if you want to be sure your application is not cracked, but it MUST prompt for a password… if it were malicious software instead of your application, the user must know that it wants to send something over the network… we can't repeat the Windows mistakes!
Maybe we can ask only the first time, or add a "remember my setting for this app" checkbox in the alert… so after the first net access Haiku never asks again for that application!
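The ask-once idea above can be sketched as a tiny permission store: the first network access by an app triggers the user prompt, and the answer is remembered so that app is never asked again. This is an illustrative Python model, not any actual Haiku API; all names are invented.

```python
class NetPermissionStore:
    """Remembers per-app network permission decisions ("ask once")."""

    def __init__(self):
        self._granted = {}  # app signature -> bool (the remembered answer)

    def may_access_net(self, app_sig, prompt):
        """Return True if the app may use the network.

        `prompt` is a callable standing in for the user-facing alert;
        it is invoked only the first time a given app is seen.
        """
        if app_sig not in self._granted:
            self._granted[app_sig] = bool(prompt(app_sig))
        return self._granted[app_sig]
```

In a real system the decision table would be persisted (and ideally keyed by something tamper-resistant, not just an app name), but the flow is the same: one prompt per app, then silence.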

What the application can and can't do - that question is off-topic.

Here is the genuineness-verification design: I, as the application designer, design it in a way that will be difficult to spoof. To do so, I add attributes to files that are sure to exist in the system; the attribute's data is a (big) numeric value calculated from the app's version, the buyer's ID, chosen system properties etc. - a number unique enough to identify the customer. The attribute's name is obscured to make it harder to find which attribute controls this data; in addition, the application adds some random numbers as attributes to other files, so the hacker won't know which attribute is the real check. (Sure enough, there may be quite a few "real" numbers, all calculated in different ways, as well as quite a few "barren", "vain" ones - it's not a good idea to put all your eggs in one basket.) Upon startup of the application, a "real" attribute value is read (randomly chosen from the pool of "real" attributes), the corresponding value is recalculated from the available system data, and the two are compared. Simultaneously, the value read from the attribute is sent over the network to a central server, which holds all the "real" numbers received from the client upon installation (no user data, of course; the data that composes the numbers is not reversibly calculable). Any of these checks may be performed several times as needed. After these comparisons, the program decides whether it's a genuine copy or a pirated one, and acts accordingly. E.g., triggers a full HDD erase :-P. Just kidding. Honestly, I see only one way to break this system: edit the application's binary and replace the "jump_if_zero" instruction representing the main "genuine / hacked" decision with a "no_op" (or "unconditional jump") instruction. Sure enough, this can be checked too, e.g. by hashing the contents of the application's file, but that is well beyond the scope of this discussion.
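The scheme above can be sketched in a few lines, with BFS attributes modelled as a plain dict (file name -> {attribute name: value}). The attribute names, the hash construction and the function names are all my own illustration, not the poster's actual design:

```python
import hashlib
import random

def license_value(version, buyer_id, machine_prop):
    """A customer-unique number derived from app version, buyer ID
    and a chosen system property (here just a SHA-256 hex digest)."""
    data = f"{version}:{buyer_id}:{machine_prop}".encode()
    return hashlib.sha256(data).hexdigest()

def install(attrs, version, buyer_id, machine_prop, rng):
    real = license_value(version, buyer_id, machine_prop)
    # One obscured "real" attribute on a file that surely exists...
    attrs["libbe.so"]["x9f3"] = real
    # ...plus decoy attributes with random data on other files, so a
    # hacker cannot tell which attribute is the actual check.
    for name in ("k2c1", "q7d8"):
        attrs["kernel"][name] = hashlib.sha256(
            str(rng.random()).encode()).hexdigest()
    return real

def verify(attrs, version, buyer_id, machine_prop):
    # Recompute from live system data and compare to the stored value.
    return attrs["libbe.so"]["x9f3"] == license_value(
        version, buyer_id, machine_prop)
```

As the later replies point out, any purely client-side check like this can be defeated by patching the binary; the sketch only shows the mechanics being debated, not an endorsement.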

To make sure a potential hacker does not find these attributes, they are attached to files that are sure to exist in the system. These are: the application's own files (duh), libbe.so, the kernel file, the Tracker file, the Deskbar file, B_USER_SETTINGS_DIRECTORY/tracker/TrackerSettings, etc. (I won't reveal all of them here, but you get the point.) This way I ensure the application can't be portable (given the application's nature, there's no point in it being portable) and can't be distributed freely.

Is this way of acting acceptable? Surely it is. I don't compromise the system in any way, I don't acquire or store the user's data, I don't open any security holes for an attacker to come through, and I don't even modify the system's binaries. Heck, I don't touch any file contents; I modify only the attributes, which are not part of the files.

The difference here is between a "User is god" system architecture and a "User is slave" one. The first is the Linux way, where any process running with root's privileges can do anything, including modifying system files. The second is the Windows way, where even the Administrator can't touch some places in the system. However, surprisingly, Linux is much more secure than Windows. Doesn't that mean Linux's way is better?

I agree that the user may be limited - although I have strong objections even to that - but not the application. If you perform any checks, base them on the user's account properties, and don't do it per application. And, of course, allow the user to run applications under another user's credentials.

Alex wrote: Here is the genuineness verification design […]

You can't bootstrap this from software; try Googling "The Client Is in the Hands of the Enemy" for discussion of a similar problem in Massively Multiplayer Online Games, and of this family of problems generally.

The generalisation of this problem to our own experience is a topic of epistemology.

On the general topic of security, not any specific post …

Research is in order before we go off into disagreements.

‘Of interest’:

  • Plan9: - has no ‘root’ superuser. A user has been granted a collection of privileges unique to what resources they have ‘plumbed’. Worth a look.

  • OS X: - ‘root’ superuser not enabled by default, nor need an ‘Admin’ be a member of the classical ‘wheel’ group. Major changes, such as installing software, are ‘challenged’ for an Admin user ID & password. Enabling ‘root’ is still permitted. Decent balance of security vs convenience in long term use.

  • Barrelfish: - (experimental) postulates a database of vetted binaries. Not the first to do so, but ‘of interest’ because it is also largely a message-passing creature, as is Haiku.

  • TuDOS: - uses a compartmentalization scheme at very low level that, for example, (should) make it very difficult for one online-banking session to be at all interceptable by anything else the system is doing at the time.

  • OpenBSD: - has protected memory, encryptable swap, and a host of other security gadgetry that does seem to work well. Mind - it ain’t ‘free’ as far as CPU-cycles or disk I/O go, so OpenBSD is generally slower than its cousins - but very stable as well as secure.

  • OS/2 & eCS: - while cousins under the skin to DOS and WinWOES, had a much better security model, despite being essentially single-user creatures, as Haiku is (still, yet).

  • Syllable: Another desktop/integral-GUI risen-from-the-ashes OS (not the ‘Server’ version, which is Linux-based)

BTW - w/r the BeOS heritage and 'single user' - the (illegal?) PhOS variant of BeOS was also multi-user (and not the only such), was it not?

Gotta start somewhere - whether ‘enterprise’ use is, or ever will be, a factor or not.

Many of us have at least one family member (or wannabee) with whom we should be able to share ‘the computer’ without getting into each other’s environments. At the moment, that is supported - but in no way enforced.

No one - not even a hobbyist - really wants to have an OS that exhibits the behaviour for which Ganymede was notorious.

;)

JM2CW

Bill

In OS/400 (the AS/400's operating system), system objects are all derived from a virtual object whose only function is to implement security features.
This approach is very elegant… and effective.
IMO we should cast a glance at it.

Glance ‘with a long spoon’ - as the Nazgul are just about the last folks on the planet one wants to get into an IP-rights tussle with … (see SCO).

Plan9 might be a friendlier model, license-wise…

Bill

I would just like to direct people to this thread about using Sandboxing as a security feature.
http://www.haiku-os.org/community/forum/sandbox_securitymulti_user_idea

Porting Systrace from FreeBSD would be great.

‘Sandbox’ ing, ‘jails’ ‘chrooting’, ‘vkernel’ - even virtualizers as a class - all have valuable places in the grand scheme of things.

But don’t confuse any of those with system-level security. It is distracting.

Security needs to be low-level, pervasive, persistent, more trouble to defeat without detection (it can ALWAYS be defeated somehow or other) than the effort is worth.

Also 'lightweight', easy to admin, and unobtrusive.

The last three are the challenge.

Others may disagree, but I have no objection to a largish increase in initial boot time, perhaps to set up encrypted storage, memory, or communications services.

Nor would I object to a significant increase (five seconds or less should be more than enough) in the initial opening of an app where the delay is due to credentials being checked.

I just do not want to forever-after continue to pay a ‘security subsystem tax’. IOW - once ‘vetted’ an app or utility or service should run unencumbered, save for things such as whether it has rights to read, write, or even ‘see’ a specific file or portion of the file system - those being either already in-place or easily made so.

One of the reasons that IPL and app opening is not on my critical path is that idle/resume is already well-handled, and hibernation / save-state can become so.

Many of us tend to run long-lived platforms and sessions, especially laptops with an inherent 'UPS' when on mains power. The IPL penalty is lost in the 'noise' when one boots less than once a week. 'Sandbox' and similar overheads, OTOH, may be present at every keystroke, mouse move, packet transferred, or 'whatever'. If Haiku goes down that road, it is no longer lean, mean, and hungry, and I'm better off with Solaris or a *BSD, as these at least are already expert at that sort of thing.

JM2CW, but a bit of work on a safe and well-drained foundation does a house more long-term good than adding storm doors or a new set of curtains.

Bill Hacker

[quote=myob]
IBMoid wrote:
As it's going to be single user for R1, this is going to be rather difficult…
[/quote]

Just to clarify, R1 did not, in fact, ship 'single user'. There were two:

  • The de-facto MOFU or ‘root’ identity, AKA ‘user’ / Yourself

  • The sshd identity.

Generating keys, enabling sshd, adding (at least) one additional identity, setting passwords and home directory was vanilla Unix, albeit with a bit of Linux / SVR(x) flavor.

Thereafter, one can ssh in to a/the new identity, and do so more than once for multiple, simultaneous ssh shell sessions. Ditto multiple on-box 'Terminal' shells with different login IDs.

Ergo Alpha R1 is no less multiuser than ‘normal’ - just does not by default insist on login etc.

The only caveat I have run into (so far) is that it respecteth not the /home/userid assignment in ~/etc/passwd, even if such a dirtree has been created and chowned beforehand. Easily fixed, that.

Lazy, I am, so I don’t care if other folks want or use such privilege separation or not.
I am just grateful to find it was already there if/as/when wanted so that I can use it.

Thanks for that…

;)

Bill Hacker

There is no privilege separation; it's an open question whether Haiku can be retrofitted with privilege separation, but today it doesn't have it - just a toy version of the Unix user-group-other file permission rules. Without the rest of the Unix security model, the file permission rules are just a nuisance.

“…an open question whether Haiku can be retro-fitted with privilege separation…”

Eminently do-able, but early days yet, so not sure how hard it would be. Will be 'aving a recce.

So far, the Haiku GUI API (which is not necessarily ‘entangled’ with security, per se for a single-seat, single monitor/kbd/pointer rig), looks to be incredibly clean vs X-Windows.

Remains to be seen whether it would be easier to implement a Haiku GUI API layer on some other already-security-aware kernel (Plan9, Barrelfish, a *BSD… 'whatever'), or to work a minimalist but reasonable security model into Haiku.

Haiku doesn’t have to compete with OpenBSD. Entirely different target.

But neither should it be less secure than NT4 or OS/2.

As said - I’m not for boiling the ocean for those who don’t see the need on their turf.

I’m just after my cup of tea not being easily contaminated.

Bill

Each application should come with a manifest file that specifies what it should be allowed to do. If the manifest file is signed by some authority (e.g. Haiku Inc.), the app is installed and no questions are asked. If the manifest is not signed and requires, for instance, system access, the user has to confirm the install; if he chooses, he can get a simple list explaining what the app wants to access.

What the manifest file should contain:

  • Whether the app is allowed to read or write to certain folders or files (the app folder, system files/folders, files of certain types, files chosen by the user in the Open/Save dialog…). A sane default for most apps is to allow access only to the files selected in an Open/Save dialog, the files from the application folder, the filetypes associated with the application and the files that the application has created itself.
  • Which libraries it can use (it can safely use any library with a matching manifest file)
  • Which API functions it can use
  • Which network resources it has access to (is it allowed to access the Internet, which protocols, on which ports, should it access only certain domains - like an update server - etc.)
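A manifest along these lines could be a small declarative structure that the OS consults before granting an operation. The following Python sketch is purely illustrative - the field names, paths and check functions are invented, not an existing or proposed Haiku format:

```python
# Hypothetical manifest for a document viewer, following the
# categories listed above (file access and network access).
MANIFEST = {
    "app": "PDFViewer",
    "read_paths": ["/boot/home/Documents"],
    "write_paths": ["/boot/home/config/settings/PDFViewer"],
    "net": {"allowed": True, "domains": ["updates.example.com"]},
}

def may_write(manifest, path):
    """The OS would call this before honoring a write request."""
    return any(path.startswith(p) for p in manifest["write_paths"])

def may_connect(manifest, domain):
    """Likewise for outbound network connections."""
    net = manifest["net"]
    return net["allowed"] and domain in net["domains"]
```

With such checks in place, even a compromised viewer could only write to its own settings folder and talk to its declared update server; everything else would be silently refused.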

‘manifest file’ … see the ‘environment’ division of any COBOL program.

But time has moved on. What is not abstracted is virtualized. A COBOL environment could specify that it required two or three tape drives, and even which models and where and how attached. Other 'divisions' then specified what input was required, and what output could be expected.

Enforcement, sadly, was largely up to (s)he who submitted the punch-cards - not the often not-yet-existent-as-such 'OS'.

Fast forward to an editor operating on files that may be on detachable media, or be NFS- or FUSE-mounted from half way 'round the world - where you are not really even certain that the underlying storage is case-preserving, let alone UTF-8 (or other).

Bottom line? Expecting an app to provide useful info about what it might want to do, with which, and to whom, is 'too complicated' for a lean, fast environment.

But (see barrelfish, among several others) ‘what if’:

  • Each 'build' shipped with all executables hash-signed, and a table of said hashes compiled into that build's kernel, or 'near as dammit' - e.g. a table loaded by the boot process with the table's hash compiled in, then 'trusted' for the hashes of the entries.

– At ‘level Zero’ all such that match are authorized w/o human intervention.

  • Any changes - and there will be many - go to 'level 1', where a MOFU [1] has to auth them once, at which point they are added to a secondary but persistent (encrypted, on-disk) table. This survives reboots.

  • Apps, which the devel team may never even see, go to 'level three'. 'Vetted' when first run by a MOFU [1], or optionally by a lower-level Admin/'Boner' [2], but stored only in RAM, i.e. they do NOT survive past the next reboot. The challenge will be raised again.
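The tiered whitelist described above is straightforward to model: level 0 is the set of hashes shipped with the build, level 1 holds admin-approved persistent additions, and anything else must be vetted again each session. A hedged Python sketch (class and method names are mine; a real implementation would live in the kernel/boot path and persist level 1 encrypted on disk):

```python
import hashlib

class BinaryVetting:
    """Sketch of a tiered hash-whitelist for executables."""

    def __init__(self, shipped_hashes):
        self.level0 = set(shipped_hashes)  # compiled into the build
        self.level1 = set()                # persistent, admin-approved
        self.session = set()               # RAM only; gone on reboot

    @staticmethod
    def digest(data):
        return hashlib.sha256(data).hexdigest()

    def authorize(self, data, admin_approve):
        h = self.digest(data)
        if h in self.level0 or h in self.level1 or h in self.session:
            return True                    # runs unencumbered
        if admin_approve(h):               # the MOFU is challenged
            self.session.add(h)            # re-challenged after reboot
            return True
        return False
```

Once an executable is vetted, subsequent launches pay no 'security subsystem tax' beyond a hash lookup, which matches the run-unencumbered-after-vetting goal stated earlier in the thread.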

Imperfect? Absolutely! Willingly stipulated as such.

But simple enough to at least raise a flag if/as/when malware arrives at the gates.

At the end of the day, ‘complete’ security is a myth. Even the most hardened systems can still be brought to their knees with use of perfectly sound and fully authorized apps, if only by causing resource exhaustion.

So - better ways exist, I am sure. But 'KISS'.

Whatever else is done - let us not fail to take advantage of the extensively documented mistakes and successes of others who have gone before. Dig out and compare the alternatives under a strong light. Weigh cost vs benefit.

Haiku doesn’t need to be fully hardened. Just strong enough that ‘most’ attackers will seek easier targets, and most others will at least raise a flag and be logged.

Bill

[1] 'MOFU': Master Of the Finite Universe. Pronounced as in Chinese Taoism, meaning roughly 'without any (magical) protection', and used in the vernacular to describe a situation one is powerless to resist or change.

[2] ‘Boner’ Box owner. Interpretation left as an exercise for the reader…

;)

I know I'm a noob, but as a user there are some things about system design I have never understood.

Like how Windows has games and little apps like notepad.exe in system folders rather than placing all those small applications in the Program Files directory. Or how Linux scatters files across a billion different directories.

Yes, I know with Windows it's a throwback to 32-bit and legacy applications. In one sense, the problem is how future features might need to change or break the design of the OS. I find Linux really bad at supporting legacy applications; I am often amazed at how many packages need recompiling for each new release of some Linux distributions.

In a sense I am wondering the same thing for Haiku. In the quest to be able to update for security reasons, how will that affect legacy applications (breaking libraries or the API)? And what is the difference between people who keep an evolving OS through updates, those who do clean installs at major releases, and those who, like me, don't feel the need to update if their system is running fine (if it ain't broke, don't fix it)?

I know I might be confusing Haiku - with its API, and with packages carrying their libraries and dependencies inside the package - with a Linux-type OS, but after a few years or a number of major releases, will you still be able to find an old package of a program you really want that says something like "FOR R2 or R3 and above"? I have heard some people have kept old versions of Linux around just to run a specific program that is no longer maintained, because all the libraries have been updated around it.

What about, in the future, large projects like OpenOffice that might update more slowly than official releases? If you update a system library for security reasons and it breaks OpenOffice, are you going to be running two versions of that library in the system directory? (I guess this is where virtualization comes in.)

Two of my pet peeves with OSes at the moment are: 1. system file creep, where your system directories keep growing in size; and 2. you really can't tell where your clean install finishes and what you have added to the system begins, because files have been placed everywhere. So we get install/uninstall programs, but they don't catch everything - like config files that a program might write when it runs, or registry entries.

What I would like to see in an OS is a system folder that represents a clean install, locked down and made read-only. Any changes to config files would be written to an administrator's system directory, as would any updated libraries or patches, and any third-party drivers etc. Have the OS probe the administrator's system directory when it boots and use any libraries, config files and drivers that it finds there, otherwise fall back to the OS's system folder. Some people like being able to copy a program anywhere and have it run; I'd like the same kind of feature for the administration directory. If you were to copy it or back it up, you would have all your updated libraries, drivers, config files etc.
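That boot-time probe amounts to a simple overlay lookup: prefer a file from the administrator's directory, fall back to the read-only system folder. A minimal Python sketch of the idea (directory roles and the function name are illustrative, not an existing mechanism):

```python
import os

def resolve(name, admin_dir, system_dir):
    """Return the path the OS should load for `name`.

    The administrator's overlay directory (updates, patches,
    third-party drivers) wins; the pristine read-only system
    folder is the fallback, so a clean install is always intact.
    """
    candidate = os.path.join(admin_dir, name)
    if os.path.exists(candidate):
        return candidate  # patched/updated version wins
    return os.path.join(system_dir, name)
```

Backing up "everything you changed" then really is just copying `admin_dir`, and deleting it rolls the machine back to the clean install - which is exactly the property asked for above.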

I don't know if this is pie in the sky or just a dream, but for the OS system directory I would also like something like this:

System
!
!–R1
!
!–R2

Each major release gets its own directory in the system folder. For a legacy application, I'd like to be able to point it, if I wish, at the libraries from R1. Each major release is a clean break from the previous one, so if there are any major changes you can point legacy programs back to previous libraries and old APIs. Delete the R1 directory if it's not needed; restore it if you have a program that does. In a sense it forms a standard base: a given set of libraries that is always present on the machine and never replaced by updates.

I wonder if any OS can avoid the creep over the years that caused Microsoft to put XP under emulation in Windows 7, or the way MacOS changed over to Cocoa, but I would like going multi-user to offer more to the system than just security. Yes, I understand you are recreating BeOS, but you are already beginning to break with GCC4. I am no developer, so I am not sure whether any of that is possible.

Just thinking out loud :)

[quote=Bill Hacker]
Bottom line? Expecting an app to provide useful info about what it might want to do, with which, and to whom, is 'too complicated' for a lean, fast environment.[/quote]

Too complicated for whom/what?

The user? - he will not be bothered unless the application tries to do something it’s not supposed to do. Actually the OS can silently refuse access without bothering the user at all.

The developer? - he spends a lot of time developing/porting the application. He can spend another hour writing the manifest. Of course, the manifest format must be made as simple as possible.

The OS? - most modern OSes already have something similar: SELinux, Windows UAC, Solaris Trusted Extensions… Sooner or later Haiku will have to implement something similar. Because if, for instance, the PDF viewer happens to have a zero-day code-execution bug - how do you stop it from wreaking havoc on your system?

The manifest can also be generated automatically - whenever an app tries to do something it’s not supposed to - the user can be prompted to add an exception in the manifest. I don’t think this should be available to the average user, but the developers can use it to easily generate the manifest.

Even having manifests only for the most exploited apps - like the browser, the media player and document viewers - can still be a huge boon for security.